Over at [Dispatches][dispatches], Ed Brayton has been shredding my old friend Sal Cordova.
Ed does a great job arguing that intelligent design is a PR campaign, not
a field of scientific research; you should definitely click on over to take a look. But Sal showed up in the comments to defend himself, and made
some statements that I just can’t resist mocking for their shallow stupidity and utter foolishness.
[dispatches]: http://scienceblogs.com/dispatches/2007/01/answering_cordova_on_ids_goals.php
Let’s start with a mangled metaphor from [here][comment-turing]:
>The theory is that a fundamental component of life, the self-replicating Turing Machine, will
>not arise from undesigned primordial elements. Easy enough to falsify.
Life is not a Turing machine. Sal, as usual, is throwing around terminology to
make it look like he’s saying something deep, when in fact he isn’t. Throughout the
comment thread, he continually invokes computer science, computation, and Turing machines as
part of his argument.
DNA can be used to *construct* a Turing-equivalent computing device. But is it correct to say
that *life* is a Turing machine? Clearly not. Is it correct to say that life is a
Turing-equivalent computing device? No. Is it even correct to say that living cells *contain* a Turing-equivalent computing device? Not clear. Cells do a lot of amazing things, but we don’t yet understand the internal biochemical processes of a living cell well enough to determine whether, modeled as a computing device, a living thing is actually doing anything that requires the equivalent of Turing-complete computation.
A metaphor will help clear up what I’m saying. [Conway’s game of life][life], the cellular automaton, [is a Turing-complete computing system][life-turing]. But *most* actual Life grids *do not* perform any Turing-complete computations. You can’t grab a Life grid that implements a glider gun (like the pattern to the right), and say that the glider gun is
a Turing machine, or that it’s Turing complete just because the underlying automaton is; it’s not doing anything TC.
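The distinction is easy to see concretely. Here’s a minimal sketch of my own (a plain glider rather than a full glider gun, for brevity; the `step` function is mine, not from any Life library): run the standard glider for four generations and you get exactly the same five cells, shifted one step diagonally. The grid is just shuttling a fixed pattern around – nothing remotely like a Turing-complete computation is happening.

```python
from collections import Counter
from itertools import product

def step(live):
    """One generation of Conway's Life; `live` is a set of (row, col) cells."""
    neighbors = Counter((r + dr, c + dc) for (r, c) in live
                        for dr, dc in product((-1, 0, 1), repeat=2)
                        if (dr, dc) != (0, 0))
    # A cell is alive next generation if it has 3 neighbors,
    # or 2 neighbors and is already alive.
    return {cell for cell, n in neighbors.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider is the same five cells, shifted down-right by one:
assert state == {(r + 1, c + 1) for (r, c) in glider}
```

The underlying rule set is Turing complete, but this particular grid only ever translates one five-cell pattern – which is exactly the point.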
But Sal doesn’t get that.
Further down the comments, [Sal comes up again with another boner; this time, a cleverly circular argument to weasel out of admitting the successes of genetic algorithms][ga]:
>And as I’ll point out genetic algorithms in the computer industry are products of design
>running in designed environments. Man-made GA’s hardly constitute a counter-example to the
>questions I pose, and if anything tend to strenghthen the arguments.
>
>Appealing to intelligently designed as proof that mindless forces can design is disingenous
>at best.
>
>Furthermore, the evoltuionary biolgists must give reasonable accounts as to why the known
>physical world of the present and past should be modelled like an intelligently designed
>Gentic Algorithm.
It’s a common criticism from ID folks that genetic algorithms don’t count, because they “run in designed environments”, and therefore strengthen the argument for design. Nonsense, but
convincing to a layman. The basic ID argument comes down to the idea that when we run a GA, we’re “smuggling” design information into the system, and so it’s not the same as biological evolution, where we claim that there is no agent intervening in the process in a way that inserts information. The thing is, there are some pretty damned brilliant GAs out there, which *don’t* always do what we want. They satisfy the selection criteria, but not always the way we
would want them to; not even always in ways that we *understand*. And there are plenty of examples of GA systems that are trying to emulate life, and end up evolving things like symbiotic relationships, parasites, etc – things which the builders of the GA system certainly
didn’t hard-wire into the system.
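For anyone who hasn’t watched one run, a GA really is this simple. Here’s a toy sketch of my own (not anyone’s production system): the only thing the programmer “designs” is the fitness function; the solutions themselves come from nothing but random variation plus selection.

```python
import random

random.seed(0)
GENOME_LEN = 32

def fitness(genome):
    # The only thing we "design": score a genome by its number of 1-bits.
    return sum(genome)

def evolve(pop_size=50, generations=100, mutation_rate=0.02):
    # Start from completely random genomes.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # truncation selection
        pop = []
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(GENOME_LEN)     # one-point crossover
            pop.append([bit ^ (random.random() < mutation_rate)  # point mutation
                        for bit in a[:cut] + b[cut:]])
    return max(pop, key=fitness)

best = evolve()
# Random genomes average 16 one-bits; selection pushes the population toward 32.
print(fitness(best))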
But what’s more interesting is taking a look at what he’s saying on a deeper level.
First, he argues that GAs *strengthen* the ID claim, because they smuggle information in by way of the design of the environment. That is, he’s making the argument that while evolution appears to be doing things, it only works because of an outside agent’s intervention.
Then he turns around, and says we need to explain why the natural environment in which biological evolution occurs looks so much like a GA system.
There’s a circle in there, if you think about it. GA systems *are modeled to emulate
biological evolution*. Sal wants us to explain *why biological evolution looks like man-made GA systems*. Biological systems look like man-made GA systems, because man-made GA systems are made to look like biological systems.
That is, because we designed systems to emulate what we observe in nature, we need to explain
*why* the things in nature look like the things we designed.
That circle is also the key to the paragraph before: the reason we must be smuggling in information is that the information can’t be created randomly; and the information can’t be created randomly because the process that creates it looks just like a system we designed, which only works because we smuggle in information through the design.
There’s really nothing there but the circle: when we imitate nature, we can’t do it without cheating; nature looks like what we did; therefore nature must be cheating, because what happens in nature looks like what we did, and since we know that we must have been cheating, then nature must be cheating because it’s doing the same thing that we did when we were cheating.
One final stupidity from Sal – a classic case of how IDists like to abuse [information
theory][it], and [deliberately mix Kolmogorov-Chaitin and Shannon theories.][shannon]:
>And that’s just for starters. In information science we have the concepts of information
>storage capacity and channel capacity. I’m afraid, even when these rudimentary questions are
>posed to evolutionary biologists we get indications they have hardly looked into the issue,
>or when they do, they find it incompatible with the prevailing paradigm. Rather than
>admitting the paradigm could be totally wrong, they sweep the problem under the rug and
>obfuscate it into oblivion….
Information storage capacity and channel capacity are concepts from Shannon theory. But
when IDists talk about the generation of information in biological systems, they’re using
the Kolmogorov-Chaitin formulation of information theory. When Sal says that biologists brush aside questions about information storage and channel capacity because they
“find it incompatible with the prevailing paradigm”, what he’s really saying is: “When we ask biologists a question about the channel capacity of biological information, they respond by saying ‘Sorry, channel capacity is a Shannon theory question, but we aren’t studying
information in terms of Shannon theory; we’re using Kolmogorov theory, because Shannon isn’t applicable to what we’re studying!’”
When a scientist answers a question by explaining that the question makes no sense, Sal claims victory.
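The difference between the two theories is easy to demonstrate. In this sketch (my own illustration, not anything from the thread), a perfectly repetitive string and a shuffled copy of it have identical symbol statistics – so the zeroth-order Shannon entropy is identical – while their compressed sizes, a computable *upper bound* on Kolmogorov-Chaitin complexity (which is itself uncomputable), differ dramatically:

```python
import math
import random
import zlib

random.seed(0)

def entropy_per_symbol(s):
    """Zeroth-order Shannon entropy of a string's symbol frequencies, in bits."""
    return -sum(p * math.log2(p)
                for p in (s.count(c) / len(s) for c in set(s)))

regular = "AB" * 500
shuffled = "".join(random.sample(regular, len(regular)))

# Same symbol statistics, so the Shannon measure can't tell them apart:
assert entropy_per_symbol(regular) == entropy_per_symbol(shuffled) == 1.0

# But the regular string has a short description and the shuffled one doesn't.
# (Compressed size only upper-bounds K-C complexity, which is uncomputable.)
print(len(zlib.compress(regular.encode())),
      len(zlib.compress(shuffled.encode())))
```

Asking for the “channel capacity” of the K-C information in a genome mixes the two frameworks in exactly this way.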
[shannon]: http://scienceblogs.com/dispatches/2007/01/answering_cordova_on_ids_goals.php?utm_source=mostactive&utm_medium=link#comment-305098
[ga]: http://scienceblogs.com/dispatches/2007/01/answering_cordova_on_ids_goals.php?utm_source=mostactive&utm_medium=link#comment-304738
[life]: http://www.ibiblio.org/lifepatterns/
[life-turing]: http://rendell.server.org.uk/gol/tm.htm
[comment-turing]: http://scienceblogs.com/dispatches/2007/01/answering_cordova_on_ids_goals.php?utm_source=mostactive&utm_medium=link#comment-304533
[it]: http://scienceblogs.com/goodmath/2006/06/an_introduction_to_information.php
Silicon, with slight traces of other elements, can be used to make transistors, some arrangements of which can act as Universal Turing Machines (neglecting complications of limited memory size and so forth). This does not mean that all collections of arbitrarily connected transistors, let alone all possible configurations of silicon atoms, are “Turing complete”.
Sal would claim victory regardless of the extent to which he has been demonstrated to be wrong, circular, clueless, or just plain lying.
Damn you’re fast!
I already posted this in the last post you had, but Sal in that comment thread also said some things directly about you and your takedown of Voie, calling your rebuttal all a big straw man.
http://scienceblogs.com/dispatches/2007/01/answering_cordova_on_ids_goals.php#comment-305622
Isn’t Sal just a pathetic puppet that parrots Buffalo Bill Dembski?
John wrote:
This is what you have to do when the idea you defend is fundamentally untrue. As I’ve said before, the simplest way to defend an untruth is to lie. If you adopt methods of dishonesty as your career, then I can only imagine that you inure yourself to worse sins in all aspects of your life. Creationism breeds immorality.
Mark my words: it’ll all come out in the IRS audits or in the divorce courts.
I have a feeling I’m going to be referencing the heck out of this post: I run into a lot of people who make the same circular complaint.
ID Creationists seem to get excited about biomimetic design, or bionics, in general. They often don’t seem to understand that human designs based on natural or even biological designs weaken their argument rather than strengthen it, for just the reasons of circularity you describe.
Examples:
Biomimetics: A Subdiscipline of ID; Lenny Susskind; and Chauvin Critiques Darwin
Biomimetics — A Subdiscipline of ID by William Dembski
Strictly speaking, that thing on your desk isn’t Turing-equivalent either; it doesn’t have infinite memory. But that just reinforces that you don’t need Turing-equivalence to get useful computational work done.
Sal has been making this same “argument” for several years. He has on occasion made it in a way that pretty clearly implies that since we can intelligently design hurricane simulations, hurricanes must be intelligently designed.
It’s also the case that a Turing machine is a pretty simple physical device (the infinite memory requirement aside).
Furthermore, the evoltuionary biolgists must give reasonable accounts as to why the known physical world of the present and past should be modelled like an intelligently designed Gentic Algorithm.
While I do not agree with Sal in any way, I don’t agree that this is a circular statement. I think what he is saying is that even though genetic algorithms (GA’s) are designed to emulate natural selection, where is the argument that these algorithms are a valid model of what is really going on in nature. He does not ask why nature behaves like a GA. He is saying that just because a model can be made to look like reality, where is the proof that reality works like the model? Kind of like all the epicycles added to the geocentric model of the solar system. That model could reproduce the observed motions of the planets but ultimately was not a correct model.
IIRC, a Turing machine doesn’t need an infinite amount of memory at any given time; we just have to be able to add new memory as needed. At least, this is the way I remember Feynman setting up the problem in his Lectures on Computation. Probably not a point worth fussing over, I know. . . .
Incredibly, in writing my previous comment, I forgot to link to this xkcd strip.
SteveM:
Yes, he *does* ask why nature acts like a GA. Look at his own words: “evolutionary biologists must give reasonable accounts as to why the known physical world should be modelled like an intelligently designed genetic algorithm.”
He’s *not* asking for an explanation of why GAs are an accurate model of the real world – he’s asking why the real world behaves like the GAs that humans designed to imitate it.
A bit off topic, but here’s a wacky site full of science related crackpottery:
http://www.rebelscience.org
The COSA system in particular is intriguing in its inanity.
Forgive my conflating Darwinism with the Origin of Life question, but since they are related in terms of overall ID thinking, it seems suitable here for me to address both:
If the Universe is in fact only 10 to 20 billion years old, then it’s at least rationally possible to ask whether sheer chance accumulations of the building blocks of life, resulting in the complexity of DNA and cellular structures, might not have occurred in the allowed time frame; some kind of teleological force might be at work to accomplish DNA in only 20 billion years.
Of course, one might recognize that the big bang isn’t really an explosive “beginning” at all (instead, it can be seen as the perceptual limit of our instruments and imaginations to the infinitely small). Or, if you believe that the universe’s expansion that we see is merely local and temporary, then the resulting possibly infinite time frame could allow for life easily via sheer chance combinations of particles. Or, if you’re amenable to the Many Worlds interpretation of quantum mechanics, this universe is but one of a possibly infinite number of universes, in which every highly improbable circumstance will exist in some universe somewhere.
And “intelligent design” need not refer to any anthropomorphic “God” (christian or not) at all. It may instead refer to an inbuilt property of chaos science: given the right circumstances, entirely unpredictable emergent properties may manifest. Life and human consciousness and teleology may be among these properties.
So although it’s true that the superstitious religionists who automatically assume that teleology must refer to their particular god of choice are clearly exhibiting an agenda-driven pov, I think it’s important to point out that some kind of “design,” or “intelligence,” or “purpose,” or “organizing principle,” or “teleology” (for lack of better words) may be involved in the origin of life and even the cosmos, even if it’s mere speculation and therefore not in the realm of science proper.
And we shouldn’t forget the crucial role that semantics plays in choosing language to explain our povs. For instance, what is meant by the word “design”? There are “designs” everywhere in Nature. The human mind is clearly capable of design, and is itself the result of the “designs” of Nature (if not of “God”). Seems clear that we can all agree that the present state of things is the result of long processes of Nature, and no one can answer the question “what is the First Cause” of Nature itself (any answer to that question is not science but metaphysics or poetry at best and literalistic religious dogma at worst).
So ID can be valid philosophical inquiry or imaginative speculation or poetry, but it shouldn’t be taught as science.
Well, that may be a point he makes from time to time, but the general idea behind all of the logical hand waving in the ID camp is: “You can’t have a system with these X constraints that produces complexity” [whatever they mean by complexity]. When somebody builds a computer simulation with the constraints listed that generates complexity (as they seem to define it), the argument somehow becomes a softer, “Oh, but does nature really work that way?” or “That’s not *really* what we meant by complexity.”
Either they have a logical argument that’s based on meaningful definitions of complexity applied to a meaningful list of restrictions on the system producing that complexity, or they don’t. You can’t go from an abstract argument about something being impossible in principle and then, once the principle is demonstrated, whine that it hasn’t been demonstrated to work that way in practice. It’s just a lame trick to move the goalposts. I think that’s what most people here are complaining about.
So far as I can see, Dembski (and consequently Sal, who parrots Dembski to the exclusion of rational thought) uses complexity solely to represent the inverse of probability of occurrence – no matter what else he claims it represents. Of course, since Dembski has never managed to produce any meaningful calculations of probability for anything, it’s hard to show a concrete example. And regrettably, they don’t have a meaningful argument.
Indeed, if the IDers stuck to criticising evolutionary models by saying “those models don’t exactly reflect what happens in nature”, that would be great, since one could simply respond with “okay, IN WHAT WAY is nature different from the models?”, and then, if appropriate, improve the models.
But that’s not the context under which the models come up, neither here nor in general; instead here we see the supposed imperfection of the models being brought up as last-ditch goalpost moving in a discussion about the supposed limits of “unintelligent” processes. In this context it doesn’t matter whether the models perfectly describe nature, since the models’ creators didn’t claim they do perfectly describe nature – instead the creationists claimed that certain aspects of the models are impossible, and the models themselves are being put forward as demonstrations that well more than that is possible.
In this context, since it is the creationists who are making active claims about these crazy constraints that exist on mindless processes, they’re obviously not helping themselves by backing off into solipsism about whether or not the models match reality. We’re not discussing whether reality can be modeled, we’re discussing whether mindlessness can produce “complexity” – whatever that means today.
Meanwhile, of course, it’s just plain funny to see these people sometimes retreating into “ah-HA! but the models are more simple than real biological evolution!” – since a lot of the time these are the same people who elsewhere have been trying to pass off the (not only simplistic, but actually dishonest) tornado-in-a-junkyard model as being the same thing as biological evolution.
This post and your comment on the thread are spot on. I’ll leave Sal’s Turing claims aside, but I think it is pretty clear that even if he were right, it wouldn’t invalidate evolution by being an obstacle for life. Models of abiogenesis show a stepwise accumulation of properties before the first “stand-alone” replicator occurred.
Speaking of IDiots’ obsessiveness about information, I find it interesting that some biologists look at parallels between models of evolution and machine learning. Evolution picks up ‘information’ from the environment, at least in the form of which genes work or not.
“Right now Chris is trying to understand natural selection from an information-theoretic standpoint. At what rate is information passed from the environment to the genome by the process of natural selection? How do we define the concepts here precisely enough so we can actually measure this information flow?” ( http://golem.ph.utexas.edu/category/2006/12/back_from_nips_2006.html#c006690 )
One possible way is that population models of asexual organisms look exactly like the Bayesian inference models used in machine learning. Each individual allele is a “hypothesis” which, after selection, improves the population’s “theory” of the environment.
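That parallel is strikingly direct. A sketch of my own (just an illustration of the analogy, not code from the linked discussion): one generation of selection in an asexual population – the replicator equation – is arithmetically the same update as Bayes’ rule, with allele frequencies playing the prior and fitnesses playing the likelihoods.

```python
def bayes_update(prior, likelihood):
    """Bayes' rule: posterior proportional to prior * likelihood, renormalized."""
    post = [p * l for p, l in zip(prior, likelihood)]
    total = sum(post)
    return [x / total for x in post]

def selection_update(freqs, fitnesses):
    """One generation of selection (replicator equation): p' = p*w / mean fitness."""
    mean_w = sum(p * w for p, w in zip(freqs, fitnesses))
    return [p * w / mean_w for p, w in zip(freqs, fitnesses)]

alleles = [0.25, 0.25, 0.5]    # allele frequencies ~ the "prior"
fitnesses = [1.0, 1.5, 0.5]    # fitness in this environment ~ the "likelihood"

# The two updates are the same arithmetic, term for term:
assert selection_update(alleles, fitnesses) == bayes_update(alleles, fitnesses)
```

Which is one way to make “selection extracts information from the environment” quantitatively precise.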
“IIRC, a Turing machine doesn’t need an infinite amount of memory at any given time; we just have to be able to add new memory as needed. At least, this is the way I remember Feynman setting up the problem in his Lectures on Computation. Probably not a point worth fussing over, I know. . . .”
Actually, it is a point well worth fussing over, as a Turing machine only needs a potentially infinite memory, and the difference between potential infinity and real infinity is both philosophically and mathematically fundamental.
@Thony C.:
I recognize that the difference between “potential infinity and real infinity” is a big one. There is, indeed, a philosophical and mathematical gulf between “infinite at time t” and “finite but capable of growth at all times t“. It just seemed a little peripheral to this thread, that’s all.
🙂
Thony:
You’re correct – in fact, in most theoretical CS literature, we use the term “unbounded” storage rather than “infinite” storage. The idea behind it is that a Turing machine doesn’t need an *infinite* amount of storage to do a computation – it only needs a finite quantity, but the amount of space that it will need is, in general, uncomputable.
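The “unbounded, not infinite” idea is easy to make concrete. In this toy simulator of my own, the tape is a dictionary: a cell exists only once the head visits it, so storage grows on demand but is finite at every step. The example machine increments a binary number, allocating exactly one new cell when the carry runs off the left edge.

```python
def run_tm(rules, tape, head=0, state="start"):
    """A tiny Turing machine; the tape dict allocates cells only when visited,
    so storage is unbounded but always finite at any given moment."""
    tape = dict(enumerate(tape))
    while state != "halt":
        symbol = tape.get(head, "_")             # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# Binary increment: start at the least-significant (rightmost) bit,
# propagate the carry leftward.
rules = {
    ("start", "1"): ("0", "L", "start"),   # 1 + carry = 0, carry continues
    ("start", "0"): ("1", "R", "halt"),    # absorb the carry
    ("start", "_"): ("1", "R", "halt"),    # past the old left edge: new cell
}

print(run_tm(rules, "111", head=2))  # 111 + 1 = 1000
```

Incrementing `111` touches cell −1, which didn’t exist when the machine started – storage was added exactly as needed, never held infinitely.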
Sal, on Ed’s blog:
Is Dell PC a computer or does it contain a computer? It has parts in it that aren’t a computer, like a power supply, transformers, cooling elements, a casing, connection points, a sound card, etc. Chu’s quibbling could be applied to the dicussion of a Dell PC as well. It only highlights his eagerness to cloud the discussion with semantic quibbling….
The analogy is flawed. When discussing whether life is a Turing-equivalent computing device, whether it contains parts that aren’t Turing-complete is just as irrelevant. Life is not a Turing machine, and trying to shoe-horn CS into an argument for ID is nonsense at best. That life is a Turing machine or a computer is your strawman, not anyone’s argument.
Norm Breyfogle:
If the Universe is in fact only 10 to 20 billion years old, then it’s at least rationally possible ask if sheer chance accumulations of the building blocks of life resulting in the complexity of DNA and cellular structures might not have occured by sheer chance in the allowed time frame; some kind of teleological force might be at work to accomplish DNA in only 20 billion years.
It may be rationally possible to ask such a question, but it is not possible to rationally answer it. The problem is that such a probabilistic question is pathway-dependent. In other words, you cannot calculate the probability of a structure arising by chance based solely upon knowledge of the structure–you must enumerate every possible pathway and calculate likelihood for that pathway to determine a net probability.
Even worse, you cannot calculate the probability just for that structure. To do so is to fall into the error of post-hoc probability calculations. To see this, deal out a deck of cards. It is quite easy to calculate that the probability of that particular sequence of cards (1 in 10^68) is so low that the likelihood that anybody would ever deal out that particular sequence by chance is negligible. Yet you just did! The error, of course, is that you cannot consider just that hand, you have to consider the probabilities of all the other sequences that you might have gotten instead.
In the case of the origin of life, that means that you cannot consider just the probability of formation of DNA and cellular structures that we see around us. You must also enumerate and calculate the probability of every possible information-carrying molecule and cellular structure. Such a calculation is obviously far, far beyond our capabilities, either now or in the foreseeable future.
So while it is not exactly irrational to ask such a question, it is kind of stupid, because even a basic understanding of probability would tell you that it cannot be answered. Indeed, when I see this kind of question asked, I always have to wonder, “Is this person really so ignorant that he doesn’t realize that such a calculation cannot be done? Or is he perfectly aware of that fact, and is asking the question rhetorically, with conscious intent to deceive?”
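The card-deck arithmetic is easy to check; a quick sketch: the number of orderings of a 52-card deck is 52!, about 10^68, so *any* particular deal is a one-in-10^68 event – and yet every shuffle produces one.

```python
import random
from math import factorial, log10

random.seed(0)

deck = list(range(52))
random.shuffle(deck)   # you have just witnessed a particular 1-in-52! sequence

orderings = factorial(52)
print(f"52! ~= 10^{log10(orderings):.0f}")  # 52! ~= 10^68
```

The post-hoc fallacy is asking for the probability of *this* deal after seeing it, instead of the probability that *some* deal of that improbability would occur – which is 1.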
tgibbs said “… is he perfectly aware of that fact, and is asking the question rhetorically, with conscious intent to deceive?”
I say that since Sal is an ID’er, your “conscious intent to deceive” is an appropriate response.
And I note that Sal, despite his bravado on Ed’s blog, has yet to show up here.
This is fairly typical Sal behavior – whenever a conversation gets uncomfortable (i.e. he is shown to be an idiot) he bails.
ATBC has a link to a simply marvelous thread on http://www.kfcs.org on mantle plumes in which Sal was utterly demolished by Joe Meert, and then fled the thread when it was pointed out that he had grossly misunderstood Walt Brown’s book.
The interesting part of that particular thread was Sal’s confession that he rejects the work and teaching of actual experts in a given field (e.g. Joe Meert) because they make him look like a fool. I thought it was the most telling analysis of Sal’s basic style I’ve come across.
I expect much the same will happen here: Sal will display a misuse of standard terminology; various attempts to bolster his fallacious position by article cites that don’t actually support his point; followed finally by an abandonment of the thread when he feels that we’ve made a sufficient fool of him.
I’ve got the wasabi-peanuts and my comfy pillow. I’m ready.
I agree, guys. I was using the question of cosmic teleology as a philosophical thought experiment to show that it leads to no conclusions, because even if we could prove that the existence of any part of the universe or even the entire universe couldn’t be attributed to sheer chance (a task beyond our abilities to do anything more than speculate about, I agree), that still wouldn’t prove that it’s guided by or created by an anthropomorphic designer. So what do Sal and other IDists REALLY mean to refer to when they point to a “designer”?
If they were honest with us (or with themselves?) they’d just come right out and admit that they’re pointing to some version of the fabled Judeo-Christian God, and that they’re only using science as a springboard to push something that’s not science at all, namely, faith.
My point is that we can defuse the IDists by pointing out their semantic sleight-of-hand: so the universe is a design (a pattern of energy), so what? We can’t derive an anthropomorphic God scientifically from that, can we?
Simple.
ss:
To be fair to Sal, the SB site has been down most of this afternoon, so it’s too soon to accuse him of wimping out.
Odd. I’ve had no trouble. The only page that didn’t load for me was the 24-hour posts.
But while we’re here – what IS the point of claiming a problem with the natural origin of Turing machines? I don’t see anything about them that precludes chemically constrained combinatorial mechanisms from creating one.
So where’s Sal? Thought he was coming over for a good argument? No, just an argument? A contradiction? A connected series of statements intended to establish a proposition? Oh, we’re talking about ID, my mistake.
http://www.mindspring.com/~mfpatton/argue.htm
In case Sal needed a refresher on arguing, since I flubbed my other URL.
…in only 20 billion years.
Wow. What a way to make a lifetime seem really, really short!
(Theme song from Jeopardy plays softly in the background)
Sal has honesty issues..
http://www.scienceblogs.com/pharyngula/2006/12/more_creationist_ellipses.php
Bad, bad creobot.
Don’t be so sure of your characterization. Some of my responses will follow this post in the next several minutes, and I intend to return, even if sporadically.
Mark displays quite a bit more intellect in a single post than an entire month of puke that comes out of ATBC; that’s why I consider an exchange with him worthwhile, especially since people on my side might actually want to hear Mark’s first-rate critique versus the usual belch that comes out of ATBC.
Sal
First, to the point of semantic quibbling over “contains a Turing Machine” versus “is a Turing machine”. Consider the Dell PC, which is loosely equivalent to a Turing Machine. Is the Dell PC “a computer” or does it “contain a computer”? If it contains a computer, can one demarcate the physical components that define the computer? Well, not so easy. The epistemological question is not so easily resolved, and it would take quite a bit of rigor to even come up with a decent way to make such demarcations, but the demarcations can only be resolved by appeals to subjective decisions anyhow about how to make them.
One could extend the same problem to a physical Turing machine. I mean, where do you say the physical Turing machine really exists? Do you exclude the power supplies, the supporting hardware that makes it possible, etc.? So in other words, Mark’s quibble is just that. The distinction between “is a Turing Machine” and “contains a Turing machine” is on the order of asking whether a Dell PC “is a computer” or “contains a computer”.
With that, in my next post I’ll address the adequacy of describing life as Turing complete.
Sal
So Mark uses the above arguments to try to persuade the Darwinists in the blogosphere that I made a stupid remark by likening life to Turing Machines?
I cited Trevors and Abel’s peer-reviewed paper in Cell Biology that said a major OOL problem was the Turing Machines of life.
Here from Yockey’s Information Theory, Evolution, and the Origin of Life:
If Mark peruses the literature out there, one hardly finds Yockey’s claim unique. Hofstadter had a similar view 26 years ago. And I pointed to a paper from the IEEE (on molecular algebra) on Ed’s weblog that had a similar view. Although I would argue that some of Yockey’s descriptions of the isomorphism are a bit too simplistic in the post-genomic era, it is not a stretch to say life at least approximates significant facets of a self-replicating Turing machine.
Whether it is exactly that or not does not negate the difficulty of forming life which has many if not all the characteristics of a Turing machine. At the very least, it would seem that merely because I related life to a Turing machine, versus someone like Trevors, Yockey, or Hofstadter, it’s automatically labeled stupidity….
Sternberg gives a more modern view of information representation in his peer-reviewed article on the deep recursivity in biology. That would not negate Yockey’s fundamental claim, but only make it more difficult for OOL proponents to explain how such deeply recursive structures formed out of a noisy stochastic process like a primordial soup.
Sal
How did you come to conclude that? That sounds like recycled Shallit (Dembski’s teacher), Elsberry, Perakh. That is not accurate.
Maybe I’ll get to that next Tuesday when I return from out of town.
It’s disappointing someone of your intelligence and learning would assent to the distortions and illogic of evolutionary biology. Some of these evolutionary biologists will say evolutionary biology doesn’t apply to OOL. Well, in that case, it has neither solved the problem nor completely discredited the supposedly repackaged creationist arguments against OOL.
I don’t intend to ignore the other stuff you wrote on GA’s either. Nor a post from this past fall on my claims about noise, information science, specified complexity and reductive evolution.
regards,
Sal
The fact that you even ask such questions indicates to me that you do not actually know what a Turing machine is.
The fact that you continue talking about Dell PCs instead of actually supporting your comments indicates to me that you know that you don’t have a point here, and your only hope to win your little PR victory in this case is to relentlessly distract by talking about things that don’t matter or relate at all– like Dell PCs.
The quote from the Yockey book you cite is:
The logic of Turing machines has an isomorphism with the logic of the genetic information system.
I am not familiar with Yockey and I haven’t read this book. If Yockey can actually outline the isomorphism he alludes to here and back up his statements, this certainly looks like a reasonable statement for Yockey to have made. But this is not the same thing as “a self-replicating turing machine” being a part of life. Do you know what an isomorphism is?
You did not “liken” life to Turing machines. You said a “self-replicating Turing machine” is a “fundamental component” of life. This might not exactly be a stupid comment, but it is certainly an unproven and unsupported one as stated there. Also on Ed’s blog you made this comment:
This comment really is stupid.
Sal Sez:
“Mark displays quite a bit more intellect in a single post than the an entire month of puke that comes out of ATBC, that’s why I consider an exchange with him worthwhile, especially since people on my side might actually want to hear Mark’s first rate critique versus the usual belch that comes out of ATBC.”
Ahh, but if it was FARTING then it would be another sciencetastic triumph for ID. It’s getting harder to parody you creobots every day.
Sal:
As Coin noted, you don’t know what a Turing-capable system is. In a computer it is both the (roughly) von Neumann architecture, abstracted from the physical layer, and the usually Turing-capable languages used.
Mark’s quibble, as you call it, is that it isn’t demonstrated that life or cell processes are isomorphic to such Turing-capable systems. To just assume so because it looks like algorithmic processing is equivalent to the assumption of design, i.e. worthless until proven.
Nitpick: in this context it is more precise to say abiogenesis instead of the larger origin of life.
Abiogenesis isn’t a problem for evolution, which stands as a separate theory. Whether cells (or brains, regarding Voie) are Turing-capable or not, this isn’t a problem for abiogenesis (or neuroscience). Models for abiogenesis (or brains) show a stepwise accumulation of properties before the first “stand-alone” replicator (mind) occurred.
This is very like the situation with Behe’s IC, where in fact interlocked systems aren’t a problem (but even a prediction of evolution). ID is repeating history – instead of making a theory and doing positive predictions, you are trying out negative claims. Even with success it will not make ID true.
There is nothing in the Darwinian model that ever had anything to do with creative evolution, absolutely nothing. Allelic mutation, natural selection, sexual reproduction, population genetics, none of these have ever played any role in the emergence of species or any of the higher taxonomic categories. Like ontogeny, phylogeny resulted from the controlled derepression of purely internal “prescribed” developmental patterns. The sole role of the environment was (past tense) to act as a releaser and to provide the milieu for the survival of these preformed patterns (species).
Furthermore, progressive evolution is a phenomenon of the remote past and it is questionable if even a new species can any longer emerge. All that we see is extinction without a documented new genus in 2 million years and no new true species in historical times.
The entire Darwinian paradigm is a fiction dreamed up by a pair of Victorian naturalists, one of whom, Alfred Russel Wallace had the good sense to abandon later in life. It is the longest lasting hoax in the history of science, dwarfing both the Phlogiston of Chemistry and the Ether of Physics. Ether, Selection, Phlogiston, ESP, Extra Sensory Perception indeed!
It is hard to believe isn’t it?
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
What “deep recursivity”? Life and cells have plenty of feedback and feedforward structures, which is essentially the same prediction as interlocking.
“These data suggest that the constructed (proteins, chromatin arrays, and metabolic pathways) has an important role in shaping the descriptor. Insofar as it is biochemically possible for states adopted by cellular structures to be stabilized and eventually memorized by engineering chromosomes, semantic closure can be transcended-meaning can be transferred from the domain of form to the genome, and this presumably ongoing process is termed teleomorphic recursivity.” ( Sternberg, “Genomes and Form: The Case for Teleomorphic Recursivity”)
Sternberg’s paper is published in the Annals of the New York Academy of Sciences. It publishes conference proceedings, so it is not a peer-reviewed paper. Proceedings publish notes that may or may not have been eviscerated at the conference. If such notes are to be referenced as peer-reviewed, they are submitted to journals that do that.
He has only put a fancy name on the complex of evolved metabolic pathways. It is also a dubious name, since in biology it already denotes a fungus in a sexually reproducing mode. ( http://en.wikipedia.org/wiki/Teleomorph%2C_anamorph_and_holomorph )
In any case, since metabolic pathways evolve, it shows that building complexity and feedback is no inherent problem for abiogenesis either.
John A. Davison:
You have your own soapbox here: http://www.pandasthumb.org/archives/2005/05/davisons_soapbo.html .
New species in historical time: http://www.talkorigins.org/indexcc/CB/CB910.html .
What the heck do you think the C in “PC” stands for? Comparing computers to Turing machines is a bad analogy to begin with, but to use this particular analogy is just stupid.
Translation: Life is like a box of Turing machines…
(sorry, had to do it)
I boggle at the mind of someone who can view either phlogiston or the ether as ‘hoaxes’. Both were fairly reasonable hypotheses, appealing to our intuitions, that were shown to be inconsistent with the available data and discarded.
This is very like the situation with Behe’s IC, where in fact interlocked systems aren’t a problem (but even a prediction of evolution). ID is repeating history – instead of making a theory and doing positive predictions, you are trying out negative claims. Even with success it will not make ID true.
This is the key point: Sal (like Dembski, etc.) presents a fundamentally false dichotomy: if no known evolutionary or chemical mechanism, or some combination of such mechanisms, can produce X, then X must be designed. No matter how many times this is pointed out to them, they fail to recognize that “unknown” or “yet to be discovered” are even to be considered possibilities. Add to this the fact that they fail to ever provide any calculations (and in Sal’s case, any actual research) to support their claims, and you have a “perfect storm” of scientific illiteracy.
It’s the point that Aquinas made long ago: the theist knows intuitively that you are wrong; his is not a position based on observation or reason. Since he knows you are wrong, there must be something wrong with your argument. Must be. So no matter how meaningless the objection, no matter how irrational the counter-contention (we can’t even dignify the ID position as an ‘argument’ really) it suffices for the theist.
“Their arguments, not being founded in reason, cannot be swayed by reason.”
Sal said:
“Don’t be so sure of your characterization. Some of my responses will follow this post in the several minutes, and I intend to return, even if sporadically.
Sal”
Sorry, Sal, but this is quite simply not true. You abandon every single thread that becomes uncomfortable for you. Anyone who is interested can look at your track record on http://www.kcfs.org, for example, where you have a large number of posts where you either post a factually incorrect claim and then fail to support it, or (as in the case of the mantle plumes) you abandon your argument when it is demonstrated that you are behaving like an idiot.
You have ZERO credibility in the blogosphere for reliability, understanding of topic, ability to stick to an argument, or even common courtesy. That is why I predict nothing of substance from you on this thread. (I note that coin and Torbjörn Larsson have already demolished those parts of your contention that Mark didn’t already get to.) Perhaps you should answer the new questions posed to you? That would show that you are at least thinking of taking this discussion seriously.
Frankly, John Davison does a better job of rational discussion than you do.
Sal said,
“It’s disappointing someone of your intelligence and learning would assent to the distortions and illogic of evolutionary biology.”
He is merely pointing out your errors. It is quite easily shown that you remain ignorant of evolutionary biology – that’s why you have to keep resorting to inappropriate analogies (e.g., Turing machines) to try to make some point.
Sal says,
“Some of these evolutionary biologists will say evolutionary biology doesn’t apply to OOL. Well in that case, neither has it solved the problem nor completely discredited the supposedly repackaged creationist arguments against OOL.”
This is logically incoherent. No one has asked evolutionary biology to solve the problem of abiogenesis; that’s another set of theories and hypotheses. Your statement is precisely equivalent to “Some of these evolutionary biologists will say evolutionary biology doesn’t apply to string theory. Well in that case, neither has it solved the problem nor completely discredited the supposedly repackaged creationist arguments against string theory.”
What is most disturbing about this is that it has nothing to do with science; this has to do with your basic inability to apply logic and intelligent demarcation. As I pointed out before, you seem fundamentally confused about such basic terms as ‘hypothesis’, ‘theory’, ‘conjecture’, etc.
Perhaps you should explain why the theory of evolution is not considered a failure by creationists (and it’s nice of you to continue to admit that this has nothing to do with science, and that ID is all about the religion) because it fails to explain the nature of the curvature of space-time.
Sal says,
“I don’t intend to ignore the other stuff you wrote on GAs either. Nor a post from this past fall on my claims about noise, information science, specified complexity and reductive evolution”
Then show it. Address them. As it stands, you’re doing exactly what I predicted. Soon you will abandon this thread when you feel it makes you look too idiotic.
Then I get to do my victory dance.
There is not a shred of evidence that any metabolic pathway ever “evolved.” The same can be said for the subcellular organelles. They all first appeared full blown in their present form. There were no “precursors” for either metabolic pathways or intracellular organelles. To continue mindlessly to assume such nonsense is without foundation.
“We might as well stop looking for the missing links. They never existed.”
Otto Schindewolf
“The first bird hatched from a reptilian egg.”
ibid
Incidentally, I was banned long ago from Panda’s Pathetic Pollex, Pharyngula, ARN, EvC and twice from Uncommon Descent, compliments of David Springer, the biggest and most cowardly compulsive bully in the history of the internet.
Who or what is next?
I love it so!
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
When Sal Cordova says “the epistemological question is not so easily resolved”, I am reminded of Deepak Chopra’s complaint, “How in the world do our thoughts manage to move the molecules in our brain?” PZ Myers described this situation eloquently: “It’s a classic example of being stumped entirely because you’ve phrased the question in an invalid way.“
I regret doing this, because there is always the chance that I am losing important information, but I just added John A. Davison to my ScienceBlogs killfile. If other people also find that the chance of useful interaction has dropped to a level only measurable by surreal infinitesimals, and if you also run Firefox with Greasemonkey, you’re free to do the same.
Sal quoting Yockey’s Information Theory, Evolution, and the Origin of Life:
That’s not a Turing machine. A Turing machine doesn’t have separate input and output tapes; it has one tape it can both read from and write to, which serves for both input and output and, crucially, can store state during a computation. Now it would appear that cells can write to DNA in the form of methylation, but the analogy to the tape of a Turing machine is decidedly strained.
Perhaps you should learn what a Turing machine is?
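For anyone who wants to see how little there is to learn: a Turing machine is just a finite transition table plus one read/write tape. Here is a minimal simulator in Python; the machine it runs, which increments a binary number, is my own toy example, not anything from Yockey:

```python
def run_tm(transitions, tape, state, blank="_", max_steps=10_000):
    """Run a single-tape Turing machine.

    transitions: (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). Halts when no rule applies.
    """
    tape = dict(enumerate(tape))  # sparse tape, unbounded in both directions
    head = 0
    for _ in range(max_steps):
        key = (state, tape.get(head, blank))
        if key not in transitions:          # no applicable rule: halt
            break
        state, tape[head], move = transitions[key]
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return state, "".join(cells).strip(blank)

# A machine that adds 1 to a binary number written on the tape.
inc = {
    ("scan", "0"): ("scan", "0", +1),   # move right to the end of the number
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "_"): ("carry", "_", -1),  # fell off the end; start carrying
    ("carry", "1"): ("carry", "0", -1), # 1 + carry = 0, carry continues
    ("carry", "0"): ("halt", "1", -1),  # 0 + carry = 1, done
    ("carry", "_"): ("halt", "1", -1),  # carried past the left edge
}

state, result = run_tm(inc, "1011", "scan")
print(result)  # "1100" (11 + 1 = 12)
```

That really is the whole formal object Sal keeps invoking: a lookup table and a head position.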
Sal:
You’re missing the entire point of my argument, instead focusing on a bogus metaphor.
One of the fascinating and counter-intuitive things about computation is just *how trivial* it is. If you peruse the archives of my pathological programming languages topic, you can see that it’s *very* easy to create a Turing complete computing system – in fact, it turns out to be fairly difficult to construct a computing system that can do much of anything meaningful that *isn’t* Turing complete.
So the fundamental question that I’m trying to get at is: is DNA capable of Turing-equivalent computation because life *requires* a Turing-equivalent computing system at its core? Or is it simply the fact that computational systems naturally tend to be TE unless they’re deliberately constrained? (I’m going to promote this into a top-level post, because it’s an interesting subject.)
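To illustrate just how trivial: Brainfuck, a staple of the pathological-languages archives, has only eight one-character instructions yet is Turing complete (memory limits aside), and a complete interpreter fits in a handful of lines. A quick Python sketch, not production code:

```python
def bf(program, inp=""):
    """Interpret Brainfuck: 8 instructions, Turing complete given unbounded tape."""
    tape, ptr, pc, out, it = [0] * 30000, 0, 0, [], iter(inp)
    jumps, stack = {}, []
    for i, c in enumerate(program):          # pre-match the [ ] bracket pairs
        if c == "[": stack.append(i)
        elif c == "]": jumps[i] = stack.pop(); jumps[jumps[i]] = i
    while pc < len(program):
        c = program[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",": tape[ptr] = ord(next(it, "\0"))
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]   # skip the loop
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]   # repeat the loop
        pc += 1
    return "".join(out)

# "++++++++[>++++++++<-]>+." computes 8*8 + 1 = 65 and prints chr(65).
print(bf("++++++++[>++++++++<-]>+."))  # A
```

If something *this* impoverished clears the Turing-completeness bar, clearing the bar tells you very little about a system by itself.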
Everyone:
Please tone down the personal insults. That’s the one thing that I really don’t want to tolerate in comments on my blog. If you can’t comment without calling other people cowards, liars, or scum; or you can’t address other people’s comments without referring to what they have to say as “puke”, then please simply refrain from commenting. We’re all adults here, we should be capable of being *civil* to one another even when we disagree.
Self-replication is not an inherent property of Turing machines, although presumably a Turing machine, running an appropriate program, and coupled to appropriate machinery, could be self-replicating. I don’t think that such a thing as a “simplest self-replicating Turing machine” has been defined, but never mind.
There is a huge gulf between speculating that the genetic control system used by modern organisms might be Turing complete (which is plausible at the level of speculation, but to my knowledge has not been demonstrated) and asserting that Turing completeness must be a property of the earliest forms of life. Logically, it does not make much sense. All that is required for natural selection to take hold is a system that replicates and that displays inheritance such that the inherited properties influence replicative success. This certainly does not imply Turing completeness. It is plausible that such a system, subjected to natural selection under appropriate conditions, might eventually evolve into something that is Turing complete, but once again this would have to be demonstrated.
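As an aside on self-replication in the computational sense: every Turing-complete language contains programs that output their own text (quines), a consequence of Kleene’s recursion theorem, so “self-replicating” by itself doesn’t pin down anything biologically special. The classic Python example:

```python
# The two lines below print an exact copy of themselves:
# %r substitutes the repr of the string, %% is a literal percent sign.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it emits its own two-line source, so in the purely formal sense “self-replicating program” is cheap; what Sal means by a “self-replicating Turing machine” as a component of life is the part that remains undefined.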
Andrew Wade makes an interesting point about input and output tapes. At the very least, this means we have to talk in terms of multi-tape Turing machines, specifically a k-string Turing machine with the transition function restricted so that (a) the input tape cannot be modified and (b) the output head can only move in one direction. Wikipedia’s article on Turing machine equivalents quotes Christos Papadimitriou’s Computational Complexity (1994) to the effect that this does not give you any fundamentally new capabilities.
The analogy is strained, but not fatally so; the more pressing point is that it is irrelevant. What extra predictions does this CS-inspired model of protein synthesis let us make? Does it give us more back than we put in — i.e., can we say anything with this description which we didn’t already have written in a biochemistry book? If not, then it is at best a descriptive analogy useful for teaching, but not a scientific theory. Show us the predictions! They may be wrong, but even rotten fruit proves the tree is alive.
Furthermore, there is a big difference between garden-variety Turing machines and a Universal Turing Machine. A UTM is a member of the big, happy Turing family, and it is built from the same conceptual parts as all the others, but it has the special ability to mimic machines given descriptions of them. What in the workings of DNA and RNA resembles this remarkable faculty? The cooperative collection of molecules — including RNA polymerase, tRNA, mRNA and ribosomes — serves essentially to translate information from one format (DNA) to another (protein). It is a biochemical implementation of a look-up table known as the genetic code. One could perhaps say that the “read head” of our machine scans a sequence of three nucleotides, “goes into a state” and consequently writes an amino acid to the “output tape”. But this is not the behavior of a UTM!
(Incidentally, I find the labeling of “tRNA, mRNA, synthetases and other factors” as “internal states” to be deceptive and misleading. A state is a configuration of the system: this tRNA docked on the ribosome, this enzyme folded in a certain configuration, and so forth. A tRNA molecule is not a state any more than a RAM chip is a value in memory.)
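The look-up-table point is easy to make concrete: translation, viewed computationally, is a plain finite map from codons to amino acids. A sketch using a small subset of the standard genetic code (the real table has 64 entries; this fragment is for illustration only):

```python
# A fragment of the standard genetic code: codon -> amino acid (one-letter code).
CODON_TABLE = {
    "ATG": "M",  # methionine, the usual start codon
    "TGG": "W",  # tryptophan
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",  # alanine (note redundancy)
    "AAA": "K", "AAG": "K",                          # lysine
    "TAA": "*", "TAG": "*", "TGA": "*",              # stop codons
}

def translate(dna):
    """Read codons three at a time until a stop codon: a pure table lookup,
    nothing resembling the universal simulation a UTM performs."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGGCCAAATGGTAA"))  # MAKW
```

A finite lookup followed by a halt condition is about as far from universal computation as a computing metaphor can get.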
If there is any validity to the description of DNA as a Turing machine, then our modern knowledge of biochemistry tells you that a Turing machine can evolve from pre-biotic components. We don’t know all the steps in this process, but we know an awful lot about it, and we have never found a “smoking gun” indicating that something other than regular organic chemistry was at work.
Maybe it sounds like recycled Shallit because Shallit is right?
I know about information theory mainly because I know Greg Chaitin, and I’ve attended a bunch of his talks, which got me interested enough to buy a bunch of his books (I did get a touch of Kolmogorov in grad school, but it was a very brief introduction in the context of computing lower bounds of algorithmic complexity). I learned a bit of Shannon theory later because I was fascinated by K-C theory.
To anyone who’s genuinely studied information theory, the difference between K-C and Shannon is very clear – and it’s also clear that Shannon theory is simply *not* appropriate for discussions of things like DNA. Shannon is primarily focused on information-as-message; while it does discuss other things, message capacity and channel capacity are very much the focus of Shannon theory. Kolmogorov-Chaitin is focused on information in terms of computation, independent of anything involving communication or transmission of data. In terms of cell biology and information in DNA, the fundamental questions being pursued by computational biology are very much questions of computation, not of message transmission.
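The difference shows up even in toy calculations: Shannon entropy sees only symbol frequencies, while Kolmogorov-Chaitin complexity is about description length, for which a general-purpose compressor gives a crude upper-bound proxy. An illustrative sketch (real sequence analysis is far subtler than this):

```python
import math
import random
import zlib

def shannon_entropy(s):
    """Per-symbol Shannon entropy of the empirical symbol distribution, in bits."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def compressed_size(s):
    """Length of the zlib-compressed string: a crude proxy for an upper bound
    on Kolmogorov-Chaitin complexity."""
    return len(zlib.compress(s.encode()))

periodic = "ACGT" * 250               # highly structured: a tiny description suffices
random.seed(1)
scrambled = "".join(random.choice("ACGT") for _ in range(1000))

# Both strings use the four symbols roughly equally, so their Shannon
# entropies are nearly identical (about 2 bits per symbol)...
print(shannon_entropy(periodic), shannon_entropy(scrambled))
# ...but only the periodic string has a short description, which the
# compressor readily finds.
print(compressed_size(periodic), compressed_size(scrambled))
```

Shannon’s measure can’t tell the two sequences apart; the computational view can, which is exactly why K-C is the natural frame for questions about what DNA is doing.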
Shallit is simply another math guy who has actually studied
information theory.
And let me point out here that you while you take the time to write a comment about this, that comment is absolutely content free. You take the time to say that I’m wrong and mischaracterizing your argument about information theory, but you don’t write a single word saying *why* I’m wrong.
Again, I can translate this down to “You’re wrong, but I won’t say why”.
“You’re wrong, but I won’t say why” is a close cousin to the Courtier’s Reply. Typically, the Courtier drops names and titles of books, the dustier the better (St. Anselm’s Index of Imperial Fabrics would do nicely), but does not explain how the arguments in those books relate to the matter in hand. In order to appreciate the weave of the Emperor’s stockings, we must spend our years of penance in the cold cloisters of the Royal Silk-weavers Academy.
By the way, congratulations to MarkCC for breaking into the “Top Five/Most Active” sidebar! Right now, you’re even leading Orac. Best of all, you can now benefit from the feedback loop of blogotubic popularity. 🙂
Since I had many face-to-face conversations with Claude Shannon (who invented channel capacity), and took courses such as [Error-detecting, Error-correcting] Coding Theory as an undergraduate at Caltech; and since I was a protégé of and coauthor with Nobel Laureate Richard Feynman, who invented nanotechnology in the 1950s, I did indeed pursue the very questions that Sal denies were pursued: “concepts of information storage capacity and channel capacity…. evolutionary biologists we get indications they have hardly looked into the issue…”
In graduate school 1973-1977 I calculated channel capacity of metabolisms by simulation, differential equations, Laplace transforms, and deeper math such as Semigroup Theory.
A yeast cell or bacterium that reproduces in half an hour has 10 times as much channel capacity in proteins as in DNA/RNA. If you look at what it takes to make the right sequence of amino acids, allow the protein to fold from primary to secondary to tertiary structure, orient the molecules, and stick them where needed on membranes, it takes on the order of 10^9 = 1,000,000,000 bits per second per cell.
The channel capacity is more subtle, as the mechanism I analyzed for information transmission in metabolisms was actually waves in chemical phase space. The proteins evolve, not to maximize Km (the Michaelis constant) or Vmax (as older texts since the 1930s said) but to optimize the lumped coefficients of the metabolism at each step of the metabolism, corresponding to eigenvalues of eigenfunctions in the space of differential operators on the nonlinear differential equations.
Many, many people have done similar things since then, and a handful at about the same time as me. So the things that Sal denies were investigated were investigated in vivo, in vitro, in silico, and in axiomatic math over 30 years ago. Prigogine rediscovered some of my results, after me, before his Nobel.
Of course, one has to know SOME science and Math to read any of the thousands of relevant papers.
I am willing to point Mark C.-C. to my refereed papers, but not to bother people with them when those people show an utter misunderstanding of the very foundations of the field.
Oh, and I “beta-tested” the first great book on GAs: “Adaptation in Natural and Artificial Systems” by John Holland, 1976. I was the first to evolve equations that solved previously unsolved problems in the science literature, without coding the answers in — as neither I nor anyone knew the answers. I presented results at poster sessions of the Artificial Life conferences. There were lots of smart people there, and none of them needed to fall back on Creationism. Likewise, the NKS Wolfram conferences in the current day. Guess what? GA systems, and wildly different systems (such as Mark’s example of Life in the J. H. Conway sense) — these WORK. No need to explain why digital systems simulating life-like computation “at the edge of chaos” resemble Life on the hoof. The papers and programs that get results are naturally (artificially) selected by referees and granting agencies.
John Davison:
How about some *evidence* for any of that? That’s an argument that I’ve heard numerous times before, but I’ve yet to see anyone actually *defend it* by showing actual evidence.
It’s easy to talk and claim something like that, but it’s a whole lot harder to actual turn that claim into a real scientific argument.
Just for example: there are a number of places where we’ve observed new genes, or new mutations of old genes – that is, information or structure in the genes that is demonstrably new.

For example, we’ve seen bacteria formed from a single clone line develop penicillin resistance by way of a modified cell-wall production pathway; that capability was *not* in the original genes of the bacteria that formed the cell line, but it wound up being produced after prolonged exposure to penicillin with clavulanic acid. (Clavulanic acid blocks the action of penicillinase, which is the common mode of penicillin resistance.)

Studies of the resultant line of resistant bacteria show modifications of the gene that codes for the production of the cell-wall component normally interfered with by penicillin. That is, we can sequence the specific genes of a normal bacterial strain, and the same genes in a resistant strain, and identify the differences. That difference was *not* originally in the gene; and it is not the case that an old gene was switched off and a new one switched on – it’s the same gene, but modified. How can you explain that in terms of your theory?
I hereby make a falsifiable (perhaps self-falsifying?) prediction.
Davison’s “explanation” of MarkCC’s example will involve dodging the facts and denying their significance rather than replying to the data. It may involve a canard of the “but they’re still just bacteria” type. Finally, it may not be expressed in a civil manner.
There. Now falsify me.
Can we just take Sal out and shoot him, and spare him further misery and indignity? Wouldn’t it be the right thing to do? My intuition says Yes!
BTW – This post is a keeper, and thank you Jonathon Vos Post for your input. If ONLY there were a way to post this, in its entirety, on Uncommonly Dense, without banning by DaveScott or Dembski. If there really were an Intelligent Designer, it would be there already. My intuition tells me so.
Maybe it sounds like recycled Shallit because Shallit is right?
Sometimes I think that the saddest thing about Dembski chickening out at Dover is that it meant Jeffrey Shallit never got to take the stand.
I’m pretty sure there are laws against that.
Can we just take Sal out and shoot him
…
I’m pretty sure there are laws against that.
Posted by: Mustafa Mond, FCD
Not to worry, I’ve seen no evidence anywhere at anytime that anything can penetrate the invisible Field of Obtuseness and Obfuscation (FOO) that surrounds and protects him and binds the Creationists/IDers. A mysteriously strong yet weak force.
I’d just like to interject and suggest that posts that give Mr. Cordova the opportunity to walk away and say that it was because people were being rude and not engaging him are probably a bad idea.
Blake Stacey,
…
I was unaware of this result, but does this not imply the presence of at least one tape that can be both read from and written to (and along which the head can move both directions)? Otherwise you’re proving the machine is as powerful as a 0-string “Turing machine”, and that’s not terribly exciting.
I think I’m getting a bit tangled up in the terminology; the DNA->mRNA->ribosome->peptide string pathway could certainly be considered analogous to a particular, specific, Turing machine. But it’s not a framework for making Turing machines in general; that is my point. Which I think is part of what you’re saying.
Now I do find it plausible that cells have the parts to build arbitrary computation devices–as per Mark C. Chu-Carroll’s comment that computational systems naturally tend to be T.E.[1]. But I am sceptical that they will function much like Turing machines.
[1] Memory limits aside. Not that cells are going to have 32 bit memory busses or anything like that, but they’re going to have limits of their own on how many resources they can devote to computation.
Second and last warning:
I do not want to see comments containing personal insults or threats on this blog. I said it before, and I’m repeating it now. We’re all supposedly adults here; we can debate in a civil manner. If you can’t be civil, don’t comment. Any more insults, and I’ll have to start deleting comments, which I really don’t want to do.
Andrew Wade,
Wikipedia sez that a k-string Turing machine with input and output is “the same as an ordinary k-string Turing machine, except that the transition function δ is restricted so that the input tape can never be changed, and so that the output head can never move left.” I know, I know, quoting Wikipedia means you’re living in a state of sin, but this is pretty far from my day-to-day work, and I haven’t studied abstract models of computation in quite a while.
Yes.
I am obviously pearl casting here and evoking the usual snot much to my delight. If anyone is interested in my views, I am still posting at ISCID’s “Brainstorms,” “Telic Thoughts,” and alanfox.blogspot.com/
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
AndyS wrote:
…in only 20 billion years.
Wow. What a way to make a lifetime seem really, really short!
Point taken. Consider the modifier “only” removed.
Maybe removing “stupidity” from the thread title would be a good place to start.
John:
I responded to your comment in a completely civil, respectful manner. To simply throw out your argument, ignore responses, insult us, and then run back to your safe haven where anyone who disagrees is banned is simply cowardly. Are you unwilling to actually engage in an uncontrolled, uncensored discussion of your theory in an open forum?
Mustafa:
My policy around here, as I’ve tried to explain before, is that I hold the comments to a different standard than the posts. The posts are intended to stimulate discussions; being a bit over-the-top can be a good thing in encouraging people to read the post and join in a discussion. But once a discussion is going on, then being abusive will just drive people away.
If I’d known Sal was actually going to show up to participate in the comments, I probably wouldn’t have picked that title for the post; but changing it now seems like it would be dishonest – like I was trying to hide the fact that I used an insulting title for the post.
I am still posting at ISCID’s “Brainstorms,” “Telic Thoughts,” and alanfox.blogspot.com/
I must admit to slightly wondering how long until JAD wears out his welcome at Telic Thoughts as well…
I’m delurking to say that I think Jonathan Vos Post’s papers deserve their own thread! Not in response to Sal, I just want to read more about his and Feynman’s work.
Scarlet said:
“You have ZERO credibility in the blogsphere for reliability, understanding of topic…”
**************************
Hey Scarlet, leave me out of this. I have enough problems now.
Zero
http://www.antievolution.org/cgi-bin/ikonboard/ikonboard.cgi?s=45631691a20e2aec;act=ST;f=14;t=3399;st=180
Blake Stacey,
Ah. That must mean “the same as” as in “the same in mechanism as” or “like”, rather than “as powerful as”. Prop 2.2, which is about the “power” of such a Turing machine, has a “+2” term in it which I find significant.
I must admit this is fairly far from my expertise. But I do have other reasons for how I interpret that confusing paragraph of Wikipedia. For instance, I do not see how a 2-string Turing machine with input and output would be able to count the symbols before the first blank in an input string, and output that count in binary format. For where would it keep the running count? Not in the output string (or its head position): once written a partial count is forever inaccessible to the machine (as too is information on the head position) and cannot form the basis for further counting. Not in the internal state: as that would place a finite limit on how high the machine could count. Not in the input string, which is read-only. Which leaves only the input head position, and the machine can’t know how far it’s moved that head without counting.
A 3-string Turing machine with input and output could keep the running count on the third tape, and of course a normal 1-string Turing machine could also perform this counting task.
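That counting task can actually be written out. Below is a sketch (my own construction, so treat the details with suspicion) of a 3-string machine with input and output: a read-only input tape, a work tape holding the running count in binary (least significant bit leftmost, so each increment is local), and an append-only output tape:

```python
def run_io_tm(T, inp, blank="_", max_steps=100_000):
    """Simulate a 3-string Turing machine with input and output:
    tape 1 (input) is read-only, tape 2 (work) is read/write,
    tape 3 (output) is write-only and its head never moves left."""
    work, wpos, ipos, out, state = {}, 0, 0, [], "read"
    for _ in range(max_steps):
        isym = inp[ipos] if 0 <= ipos < len(inp) else blank
        key = (state, isym, work.get(wpos, blank))
        if key not in T:
            break                               # no applicable rule: halt
        state, wwrite, din, dwork, owrite = T[key]
        work[wpos] = wwrite                     # only the work tape is written
        ipos, wpos = ipos + din, wpos + dwork
        if owrite is not None:
            out.append(owrite)                  # append == head moves right only
    return "".join(out)

# Count the "x"s before the first blank on the input tape and emit the count
# in binary: each input symbol triggers a binary increment on the work tape.
# Rule format: (state, input_sym, work_sym) ->
#              (new_state, work_write, input_move, work_move, output_write).
T = {}
for w in "01_":
    T[("read", "x", w)] = ("inc", w, +1, 0, None)     # consume one input symbol
    T[("read", "_", w)] = ("tomsb", w, 0, 0, None)    # end of input: emit count
for a in "x_":
    T[("inc", a, "1")] = ("inc", "0", 0, +1, None)    # carry ripples rightward
    T[("inc", a, "0")] = ("rewind", "1", 0, -1, None)
    T[("inc", a, "_")] = ("rewind", "1", 0, -1, None)
    for w in "01":
        T[("rewind", a, w)] = ("rewind", w, 0, -1, None)
    T[("rewind", a, "_")] = ("read", "_", 0, +1, None)
for w in "01":
    T[("tomsb", "_", w)] = ("tomsb", w, 0, +1, None)  # walk out to the MSB
    T[("emit", "_", w)] = ("emit", w, 0, -1, w)       # copy MSB-first to output
T[("tomsb", "_", "_")] = ("emit", "_", 0, -1, None)

print(run_io_tm(T, "xxxxx"))  # 101
```

The partial count never touches the output tape, which is exactly the point of the earlier comment: without the work tape (a 2-string machine with input and output) there is nowhere unbounded to keep it.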
If Mr. Vos Post would be willing to post references to or information from some of the papers he refers to, I’d actually be extremely curious to read them as well.
I’m curious specifically about how one goes about calculating channel capacity for an “unusual” information source– being able to look over an example of such a calculation done for a biological system would be really neat.
John:
You did not address the TalkOrigins reference. They discuss (with references) new species in historical times. So how can you continue to insist there is no evidence?
There are literally thousands of papers that describe, predict and confirm evolution in pathways. Here is a cool example of the evolution of (interlocked, btw) hormone-receptor metabolic pathways: http://www.pandasthumb.org/archives/2006/04/evolution_of_ic.html .
Not so. For example, since ribosomal RNA is both a conserved and essential element, it is likely a molecular fossil. It is also a basis for the prediction that RNA came before proteins (the RNA world).
Likewise, IIRC, biologists have found mitochondrial genes that have migrated and become integrated in the genome of the nucleus. Mitochondria have double membranes and their own ribosomes, with bacterial RNA. AFAIK the evidence for mitochondria as exapted organisms is pretty firm.
Other predictions, such as that the nucleus and some other organelles (with their own remaining DNA) are incorporated organisms, are pretty good too. See Wikipedia, or here: http://web.mit.edu/esgbio/www/cb/org/organelles.html .
There isn’t any assumption, this is science making hypotheses that can be tested.
John:
I forgot:
Sorry, I didn’t know that. (But I’m not surprised. 😉
Well, what about your own blogs then? If you don’t answer any arguments (see my previous comment on your claim of no new species in historical times), you are merely trolling.
I already have plenty of egg on my face for confusing my private speculations about the possibility of Turing capability with evidence for the existence of something isomorphic to Turing-complete computation. But I can’t help finding it a fascinating idea as another aspect of nature.
If systems of life are isomorphic to algorithmic computation, it would be tempting to think that those systems tend to evolve to be more capable and eventually Turing-equivalent; see Mark’s comments. Consequently they would be more or less automatically as capable as possible in the algorithmic sense, if they need to be.
Examples could be cell metabolism (perhaps in building new pathways), genome and phenome adaptation (perhaps in variation and selection), or brain function (perhaps in signal processing). Feedback and self-replication, as forms of self-reference, may or may not be part of some examples, but at least to me it isn’t clear whether that is enough to constitute recursion.
Perhaps one can make specific predictions if one looks at TE specifically, which would make it a part of a description. Perhaps not.
Should have read further. Davison is a troll, and should not be answered, because he thrives on it and seeks no serious conversation. Got it.
Oh, and when I said troll it was as a web classification, not intended as an insult. (But which of course it also is. Sorry about that.)
Torbjorn Larsson, whoever that is.
I am used to not being answered as that is the perfect demonstration of intellectual cowardice. Thanks for exposing yourself.
Separate species are established when their experimentally demonstrated hybrid exhibits sterility and when their origin can be explained. Discovery of a new life form is in no sense a proof of its recent origin, especially when its ancestor has not been identified.
Incidentally, if I am a troll, I am at least a published troll. Where may I find the evolutionary papers of my accusers; in the Journal of Negative Results?
It is only the MECHANISM of a long past evolution that remains in doubt, exactly as my signature suggests.
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
So, John, whatever the mechanism of past evolution (let’s leave that to the side), according to you it has either stopped working in the present or the sampling period of recent science is too short to perceive it working in the present. Which is it?
John:
First: as I keep saying, we should be capable of behaving like civil adults here. I’m giving you extra room, just to prevent you from being able to run away saying you were banned – but cut the personal insults.
Second: It’s quite interesting to see your definition of
cowardice, when your own behavior so perfectly matches your own accusation against someone else. You’re calling another commenter here a coward for purportedly not answering you, while you rather pointedly refuse to answer polite, legitimate questions about your own theory.
How about answering at least *some* of the legitimate questions put to you with some evidence, instead of just shouting more unsupported assertions?
For example, how about my question about cell-wall pathway modification for β-lactam antibiotic resistance in a bacterial clone line?
Or how about Torbjorn’s question about mitochondrial genes?
Furthermore, the notion of an incremental origin of any living system is absurd. So too is the idea that any intracellular organelles ever had precursors. Such childish ideas originate only in “prescribed” severely handicapped minds.
There is every reason to believe that progressive phylogeny is a phenomenon of the distant past and absolutely no evidence that it is still in progress. As for the support for a determined evolution, I have summarized that evidence, both indirect and direct, in my 2005 paper “A Prescribed Evolutionary Hypothesis.” As for the cytogenetic mechanism for the release of that “prescribed” information, I presented that hypothesis twenty-seven years ago in my 1984 paper – “Semi-meiosis As An Evolutionary Mechanism,” a paper which has been ignored by the chance-intoxicated, mutation-happy Darwinian establishment. I stand behind every paper I have ever published since my first one in 1954, and not a word I have ever published has been either challenged or demonstrated to be in error in any way in the refereed scientific literature. It is only on pathetic internet blogs like this one that I am subject to the sort of abuse that betrays cowardly hamstrung ideologues completely out of touch with the reality as revealed by the experimental laboratory and the undeniable testimony of the fossil record.
I wouldn’t have it any other way.
I love it so!
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
Pardon my typo. 1984 was twenty-three years ago, not twenty-seven.
John:
First of all, scientists engaged in a discussion generally present evidence rather than just shouting about how “absurd” everyone is.
Second, if you want to use a reference for evidence, it’s generally considered appropriate to actually provide a complete citation to allow people to locate a copy of the
alleged evidence. “My 1984 paper” is not enough information for me to track down a copy.
Third, if it’s worth your time to *keep* coming back here to insult people, why is it not worth your time to actually *present your argument*? You haven’t actually presented anything here other than bald assertions; every question put to you, you’ve either ignored, insulted the questioner, or blustered past without answering.
Pick any one of the reasonable, politely phrased questions that have been put to you here, and actually *respond to it*. Otherwise, you’re making yourself look like the coward you accuse others of being.
Mark, while I voted for you every day for best science blog (the fix was in, I’m sure; that’s the explanation for your not entirely complete victory. Curse the astronomically inclined Cephalopods!), I didn’t run across the distinction you make between posts and comments. My bad, and ’twill not happen again.
But to that point, and in light of the immediate responses of both John and Sal, I was wondering if we could ask them both (and particularly Sal) to confine comments to actual discussion rather than simply citation. Sal does have a strongly established reputation of failing to support his arguments by any original discussion and relying primarily on citations of articles; John appears to rely primarily on invective (hence his regrettable history of being banned).
Sal, can you abide by that? Original commentary and your own words explaining points rather than simply cutting and pasting some other article? Otherwise it will make it hard to have a serious conversation on this point.
Huh? I answered you, and asked why you didn’t react.
Again, you claimed that evolution isn’t ongoing. Evolution predicts new species, and I offered a site with examples and references to new species in historical time.
There are many definitions of species concepts considered within evolution. Wilkins lists 26. ( http://scienceblogs.com/evolvingthoughts/2006/10/a_list_of_26_species_concepts_1.php )
“Different definitions of species serve different purposes. Species concepts are used both as taxonomic units, for identification and classification, and as theoretical concepts, for modeling and explaining. There is a great deal of overlap between the two purposes, but a definition that serves one is not necessarily the best for the other. Furthermore, there are practical considerations that call for different species criteria as well. Species definitions applied to fossils, for example, cannot be based on genetics or behavior because those traits do not fossilize.
Species are expected often to have fuzzy and imprecise boundaries because evolution is ongoing. Some species are in the process of forming; others are recently formed and still difficult to interpret.” ( http://www.talkorigins.org/indexcc/CB/CB801.html )
So I don’t need to accept your definition, which you propose because you want to dismiss evolutionary results.
But it confirms evolution, being predicted by it.
Many of the listed examples of observed speciation have identified ancestors.
If you are blogging on a math blog, you should not assume this. And btw your own papers critiquing evolution are published in the infamous “Rivista di Biologia”, so you can’t say that you have any peer-reviewed papers on the matter either.
No, this is from professional biologists. (References in the links given.) I assume you don’t know who they are.
Scarlet,
I note your cut-and-paste policy now. Umm, I am in a hurry so that will be my excuse this time. 🙂
John Davison said: …a paper which has been ignored by the chance-intoxicated, mutation-happy Darwinian establishment
There’s the telltale sign of quackery: when losing in the marketplace of ideas, invent a conspiracy to explain it. Never question your own views. It’s the philosophical equivalent of a fan who claims his team is the best, and only loses when the referees cheat for the other side.
Your car is ready sir. Hutchison, Tesla, Hahnemann, Baker Eddy and Lyndon LaRouche await you at the ball.
Since Kristine asked, here’s an excerpt from a paper intended for delivery in 2006 in Trento, Italy, except that the referees agreed that it was “out of scope” as a personal narrative. The full paper has a lengthy list of references, including to a subset of my research papers.
The full paper summarizes how my discussions with Feynman, about what became Nanotechnology, led me to my dissertation, in which I applied the Krohn-Rhodes theorem to the semigroup of differential operators of a classical Enzyme Kinetics set of differential equations, with good results.
Except that only a handful of biologists knew that level of math, and none of them were both on my PhD committee and favorably disposed.
==========
Picosecond to Lifetime to Gigayear and Single Molecule to Organism to Ecosphere in Computational Enzyme Kinetics and Proteomics
by
Jonathan Vos Post
[address and phone deleted here]
[Draft 3.0; 26 pages; 12,100 words; 23 April 2006]
INTERNATIONAL CONFERENCE ON COMPUTATIONAL METHODS IN SYSTEMS BIOLOGY
18-19 October 2006
The Microsoft Research – University of Trento Centre for Computational and Systems Biology, TRENTO – ITALY
http://www.msr-unitn.unitn.it/events/cmsb06.php
ABSTRACT:
My research goal since 1973 has been a computational theory of protein dynamics and evolution which unifies mechanisms from picosecond through organism lifetime through evolutionary time scales. This unification effort included my Ph.D. dissertation work at the University of Massachusetts, Amherst (arguably the world’s first dissertation on Nanotechnology and Artificial Life), and those chapters of that dissertation which have been modified and subsequently published as refereed papers for international conferences [Post, 1976-2004]. The unification must describe the origin of complexity in several regimes, and requires bridging certain gaps in time scales, where previous theories were limited (as with the breakdown of the Born-Oppenheimer approximation in certain surface catalysis and solvated protein phenomena). The unification also requires bridging different length scales, from nanotechnological to microscopic through mesoscopic to macroscopic. The unification at several scales involves nonlinear, kinetic, and statistical analysis connecting the behavior of individual molecules with ensembles of those molecules, and using the mathematics of Wiener convolutions, Laplace transforms, and Krohn-Rhodes decomposition of semigroups. Recent laboratory results in several countries, including the ultrafast dynamics of femtochemistry and femtobiology, which probe the behavior of single molecules of enzyme proteins, shed new light on the overarching problem, and confirm the practicality of that goal.
TABLE OF CONTENTS
Abstract 1
1.0 Introduction 2
1.1 Motivation 2
Table 1: Time Scales (see Appendix A) 4
1.2 Femtosecond to Picosecond 5
1.3 Picosecond to Nanosecond 8
1.4 Nanosecond to Microsecond 9
1.5 Microsecond to Millisecond 9
1.6 Millisecond to Second 9
1.7 Second to Kilosecond and my PhD Dissertation 10
1.8 Kilosecond to Megasecond 14
1.9 Megasecond to Gigasecond 14
1.10 Gigasecond to Terasecond 14
2.0 Computational Systems Biology 15
3.0 References 15
Appendix A: Biological Time Spans 24
[snip]
1.7 Second to Kilosecond, and my PhD Dissertation
Part of my motivation for considering enzymes responsible for the majority of information processing in living cells is this notion. When a single cell reproduces in a fast-growing yeast or bacterium, the entire organism is replicated in roughly 15 to 30 minutes, approximately a kilosecond. If one examines how many protein molecules must be processed to accomplish this reproduction, as DNA is transcribed into mRNA, which is then translated to protein, and each protein is oriented and placed on a membrane, the translation-orientation information is in the range of 1 to 3 x 10^9 bits of information per second. That is, a gigabit per second is processed in a single cell, with more of that information in the proteins than in the DNA and RNA combined.
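The general form of that back-of-envelope estimate can be sketched in a few lines. The specific counts below are illustrative assumptions, not figures from the text (beyond the roughly kilosecond doubling time), and the result scales linearly with each of them:

```python
# Sketch of the form of the information-rate estimate above.
# All input numbers are illustrative assumptions, not measured values.

from math import log2

proteins_per_cell = 3e6        # assumed protein copies made per division
residues_per_protein = 300     # assumed average chain length
bits_per_residue = log2(20)    # choosing 1 of 20 amino acids, ~4.32 bits
doubling_time_s = 1e3          # roughly a kilosecond, as in the text

total_bits = proteins_per_cell * residues_per_protein * bits_per_residue
bits_per_second = total_bits / doubling_time_s
print(f"{bits_per_second:.2e} bits/s")
```

Any such figure is dominated by the assumed counts (and by how much orientation/placement information one charges per protein), so it is best read as an order-of-magnitude exercise.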
I began to wonder, starting with questions that I discussed with Richard P. Feynman in 1968-1973: what is the bandwidth for propagation of information within and between cells in a multicellular organism? Is that information transmitted in something more like an analog, digital, frequency-domain, pulse-amplitude, pulse-frequency, or other coding? To answer such questions computationally, and to get models that fit existing experimental data (which in the early 1970s was of far lower resolution than today by many orders of magnitude), I was guided by Professor Bruce R. Levin, then at the University of Massachusetts, now at Emory University. Bruce R. Levin is best known today for publications in epidemiology, microbiology, and population genetics, such as [Michod, 1987], [Levin, 2000], [Levin, 2004].
Bruce R. Levin was on my informal Dissertation Committee, and later insisted that he certified that I had more than the equivalent of an M.S. in Biology. He led me to computational modeling of the nonlinear differential equation system of Michaelis-Menten equations for an open system of irreversible enzymes in an enzyme chain. In such a biologically significant system, a substrate S (which we may also denote as the zeroth intermediate metabolite A0) diffuses into a cell through a membrane, and an enzyme E0 catalyzes it into a first intermediate metabolite A1. This is itself a substrate for the next enzyme, E1, to catalyze into the second intermediate metabolite A2. That in turn is a substrate for the next enzyme, E2, to catalyze into the third intermediate metabolite A3. The chain continues until some metabolite An, also known as the Product, which diffuses out of the system through the cell membrane.
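As a hedged illustration of the chain just described, here is a minimal numerical sketch (assumed Vmax, Km, influx, and efflux values; simple Euler stepping, not the author's original code) of an open irreversible Michaelis-Menten chain relaxing to steady state:

```python
# Open chain: substrate A0 diffuses in, each enzyme Ei converts Ai -> Ai+1
# with Michaelis-Menten kinetics, and the product An diffuses out.
# All parameter values are illustrative assumptions.

def mm_rate(a, vmax=1.0, km=0.5):
    """Michaelis-Menten rate v = vmax * a / (km + a)."""
    return vmax * a / (km + a)

def simulate_chain(n_enzymes=3, influx=0.3, k_out=0.5, dt=0.01, steps=20000):
    a = [0.0] * (n_enzymes + 1)              # A0 .. An (An is the product)
    for _ in range(steps):
        v = [mm_rate(ai) for ai in a[:-1]]   # rate of each enzyme step
        da = [0.0] * len(a)
        da[0] = influx - v[0]                # substrate enters the cell
        for i in range(1, n_enzymes):
            da[i] = v[i - 1] - v[i]          # made by Ei-1, consumed by Ei
        da[-1] = v[-1] - k_out * a[-1]       # product leaves the cell
        a = [ai + dt * dai for ai, dai in zip(a, da)]
    return a

steady = simulate_chain()
print(steady)
```

At steady state every step carries the same flux J as the influx, so each intermediate settles at Ai = Km·J/(Vmax - J) and the product at J/k_out, which the simulation reproduces.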
The enzyme chain is one of the essential building blocks of any metabolism. Other building blocks are network junctions where the same product becomes the substrate for two different enzyme chains (a branch point), and the feedback loop characteristic of endproduct inhibition. Such endproduct inhibition feedback, obvious to any engineer then, and to any
Complex Systems student today, was not in fact discovered in organisms until 1959 by Umbarger
[Halpern, 1960]. Endproduct inhibition operates by this mechanism: almost always the first enzyme in the chain, E0, has its catalytic ability controlled by the concentration of the product P. This is accomplished because E0 is an “allosteric” enzyme, which can functionally bind to both substrate A0 and to product An. When bound to product, its conformation changes, and its catalytic kinetics are changed to process A0 more slowly, thus providing negative feedback.
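The allosteric attenuation of E0 just described can be sketched as a rate law; the Hill-style inhibition factor and all parameter values below are illustrative assumptions, not the author's model:

```python
# Sketch of endproduct inhibition: the first enzyme E0 is allosteric,
# and its rate is scaled down as the product concentration rises.
# Parameter values (ki, hill) are illustrative assumptions.

def inhibited_rate(a0, product, vmax=1.0, km=0.5, ki=0.2, hill=2):
    """E0's Michaelis-Menten rate, attenuated by product binding."""
    inhibition = 1.0 / (1.0 + (product / ki) ** hill)  # negative feedback
    return vmax * a0 / (km + a0) * inhibition

# With no product present, E0 runs at its full Michaelis-Menten rate;
# as product accumulates, the same substrate level is processed more slowly.
fast = inhibited_rate(a0=1.0, product=0.0)
slow = inhibited_rate(a0=1.0, product=1.0)
print(fast, slow)
```

Plugged into a chain simulation, this factor is exactly what produces the damped oscillations in product concentration noted in point (b) below.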
There are several remarkable things about this mechanism:
(a) The linking of last to first step (product to the enzyme which catalyzes the substrate) is far more common in the metabolisms of organisms than any of an enormous number of other feedback topologies;
(b) The negative feedback both enables oscillation in the quantity of product over time, and damps that oscillation;
(c) The enzyme chain with endproduct inhibition appears to be optimal with respect to 4 different definitions of optimality, as demonstrated by [xxxx].
{I think I filled this gap with a citation to Savageau}
When I coded the numerical solution of the system of Michaelis-Menten equations for an open system of irreversible enzymes in an enzyme chain, I got phenomena with which Bruce R. Levin was not familiar, and which he suspected were due to errors on my part. But the phenomena, which I called “enzyme waves,” were robustly present however the numerical integration was done, and the equations were confirmed by Bruce R. Levin to be the ones that he wanted me to use. Later, the “enzyme wave” phenomenon was confirmed to be observationally genuine, and was treated by Ilya Prigogine before his Nobel prize. Egotist that I was, I was convinced that I understood the phenomenon better than Prigogine.
Along with Feynman, I was influenced by the many graduate seminars I’d attended at Caltech by Derek Fender, head of Caltech’s BioInformation Systems Lab for research in non-linear aspects of brain function.
Derek Fender was a pioneer in interactive online science, with rapid man-machine feedback in the course of experimentation. In Fender’s style, instead of submitting my simulation runs as decks of punch cards, I worked with the then-unusual but today routine graphics-driven interactivity. I input values, watched 3-D graphs being drawn and rotated, and, based on visual input and the intuition it enhanced, changed the values for the next input or modified parameters of the model.
I also took courses from John Todd; plus Combinatorics from Herbert John Ryser [28 July 1923 - 12 July 1985], one of the major figures in Combinatorics of the 20th Century; and “Error Detecting and Correcting Codes” from Solomon Golomb, better known to computer students for the “Game of Life” (actually Conway’s invention) and for Polyominoes (1953); and by graduate courses at the University of Massachusetts under William Kilmer, a student of Warren S. McCulloch [16 Nov 1898 - 1969], neurophysiologist and co-founder of Cybernetics.
Also in graduate school, my mentor (and Dissertation Committee member) was Oliver Selfridge, grandson of the founder of “Selfridge’s” in London, acknowledged as the “Father of Machine Perception.” He reviewed the 1949 draft of Norbert Wiener’s seminal book “Cybernetics.” He was involved with McCulloch, Pitts, and other founders of the field of Cybernetics, and also in the early days of Artificial Intelligence. At MIT, Selfridge was technically a supervisor of Marvin Minsky, who is the head of Artificial Intelligence at MIT. Oliver Selfridge came to MIT from London at the age of 14 “to study with the greats.” He organized the first ever public meeting on Artificial Intelligence (AI) with Minsky (1953). He wrote important early papers on Neural Nets (1948) and on Pattern Recognition and Learning (1955). His “Pandemonium” paper (1958) is recognized as the beginning of breakthroughs in several fields.
Simultaneously, off-campus, I also worked directly with Theodore Nelson, who invented Hypertext and Hypermedia, and is an acknowledged grandfather of the World Wide Web. For him, I co-implemented the world’s first working hypertext system for personal computers, and demonstrated it at the world’s first Personal Computer Conference, before IBM, Tandy, and Apple made personal computers. Through Ted Nelson, I was involved with John Mauchley, who shared the patent for the Digital Electronic Computer. Mauchley built the first dual-processor (the top-secret “BINAC”) for the Air Force in World War II, and then founded the Eckert-Mauchley Computer Company, which was sold to a company that merged with another company to become UNIVAC. Mauchley was the main force behind the UNIVAC I, America’s first commercial computer.
All these influences converged on my online experimentation, along with copious reading and regular trips to MIT to discuss my research with people in the AI Lab. I also “beta-tested” the manuscript of John Holland’s breakthrough book [Holland, 1975] at Kilmer’s invitation. I accomplished many things in the short period from January 1975 (when I earned my M.S.) through June 1977 (when I abruptly left the University after the outgoing plagiarist Chairman and incoming idiot Chairman blocked the crystallization of the ad hoc Dissertation Committee into a formal Dissertation Committee, thus preventing my PhD dissertation “Molecular Cybernetics” from being read, and thus denying me an Oral Defense):
(a) I robustly measured the shape, amplitude, velocity, and other characteristics of the enzyme
waves under a wide range of parameters and initial conditions;
(b) I found empirical relationships between functions
perturbing the concentration of product over time with the parameters of the dynamic phenomena of the enzyme wave;
(c) I used Holland’s Genetic Algorithm to artificially evolve best-fit equations to the empirical relationships that I found in these interactive
simulations;
(d) Under Bruce Levin’s guidance, I gave my first paper on the implications of this work to the
evolution of proteins [Post, 1977];
(e) The textbooks all said that there was no closed-form solution to the nonlinear differential equation system of Michaelis-Menten equations for an open system of irreversible enzymes in an enzyme chain, so I worked backwards from the Genetic Algorithm-evolved equations to derive those equations from first principles;
(f) I was able to completely describe the enzyme waves by laborious matrix-exponential manipulation of the differential equations;
(g) By modeling the dynamics of the enzyme chain in response to a Dirac delta (impulse) perturbation, and using the mathematics of Wiener convolutions, I could show how the concentrations of all intermediate
metabolites and the product varied as functions of time in response to any input function of substrate as a function of time;
(h) I now expressed the matrix of input-output relationships as a transfer function, with Laplace transforms (in some sense, the enzyme waves are in phase space more than in an obvious spatio-temporal activity);
(i) After leaving Amherst, working further with Ted Nelson and using UNIX on Bell Labs terminals in New
Jersey 1977-1979; I was hired by Boeing at the Kent Space Center, near Seattle; and perfected an elegant Krohn-Rhodes decomposition of semigroups of differential operators of the “unsolvable” nonlinear differential equation system of Michaelis-Menten
equations, giving a “one-liner” solution to what had taken 30 pages of matrices and integrals before;
(j) Marvin Minsky told me that this Krohn-Rhodes work was the best application he’d seen so far (that was circa 1977) of the main theorem of John Rhodes (now Professor, Department of Mathematics, University of California at Berkeley), for whom Minsky had been a Dissertation advisor;
(k) The enzyme wave phenomenon for an open system of irreversible enzymes in an enzyme chain, automatically acted as a filter, specifically damping frequencies
characteristic of kT noise in the living cell, as makes evolutionary sense;
(l) Using my solutions, I could a priori determine when the enzyme chain with endproduct inhibition would be oscillatory or not, and at what frequencies and amplitudes;
(m) I could determine a priori the velocity, amplitude, and wave form of enzyme waves, and use those to measure the bandwidth for information propagation in the metabolism;
(n) I observed and wrote about what I then called “aperiodic oscillations” in the metabolism, which today would be described as saying that the enzyme systems seem to have evolved to operate “at the Edge of Chaos” (in Santa Fe Institute terminology);
(o) I began to design systems that reverse-engineered and modified metabolic systems to also act as pulse-code modulators and shift registers, and in principle showed how to build signal processors, memories, and analog-digital computers inside a single cell, in what I contend was the world’s first dissertation on Nanotechnology;
(p) I began decades of publication of chapters of the PhD dissertation in the proceedings of international conferences, including this one, even though I was barred by national security from making certain
presentations behind the Iron Curtain relevant to my stated goal of determining how to computationally simulate the entire behavior of a living cell;
(q) Stanislaw Ulam (co-inventor of the H-bomb, co-inventor with von Neumann of cellular automata) was interested in these results, as they were a continuous system, per a Bell Labs videoconference, and might have co-published with me on it had he not fallen ill and died soon after.
(r) In summary, my work of 1975-1977, as subsequently
published, answers many of the questions which were not widely asked until 25 to 30 years later, and which are central to the concerns of this conference on
Computational Biology.
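Items (g)-(h) above rest on standard linear-systems reasoning: characterize the chain by its impulse response, then obtain the output for any input by convolution. A minimal discrete sketch of that idea, with an assumed, made-up impulse response rather than one derived from the kinetics:

```python
# Discrete convolution sketch of the impulse-response / transfer-function
# idea in items (g)-(h): once the chain's response h to a delta input is
# known, the output for any input u is (u * h).

def convolve(u, h):
    """Discrete convolution (u * h)[n] = sum_k u[k] * h[n-k]."""
    out = [0.0] * (len(u) + len(h) - 1)
    for n in range(len(out)):
        for k in range(len(u)):
            if 0 <= n - k < len(h):
                out[n] += u[k] * h[n - k]
    return out

impulse_response = [0.5, 0.3, 0.2]   # assumed, decaying response
# A delta input reproduces the impulse response itself:
assert convolve([1.0], impulse_response) == impulse_response
print(convolve([1.0, 1.0], impulse_response))
```

Taking Laplace (or z-) transforms turns this convolution into multiplication by a transfer function, which is the "one-liner" form alluded to in item (i).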
[truncated here]
Best,
Jonathan Vos Post
“my 1984 paper – “Semi-meiosis As An Evolutionary Mechanism,”
Oops, I missed that. It wasn’t in Rivista – but regarding a later published answer from Davison, it wasn’t well received either.
John:
Btw, since you published an answer concerning this paper 3 years later, how can you say the above? I assume your answer was to a critique in the same journal.
So I see you never read the most important paper of all, “Semi-meiosis as an evolutionary mechanism,” the paper on which all the subsequent ones were based. That figures. Of course my work has not been well received. Neither were the works of Richard B. Goldschmidt, the preeminent geneticist of his day, Otto Schindewolf, the greatest paleontologist since Cuvier, Leo Berg, the most distinguished Russian biologist of his day, and Pierre Grasse, his French counterpart. What would you expect from a bunch of “prescribed” homozygous, atheist, sedentary worshippers of the Great God Chance? There has never been a real scientist in the whole damn lot. Would you really expect them to abandon the biggest hoax ever perpetrated on a naive public in the history of mankind? I am delighted to be identified with the real pioneers of evolutionary science, not a pontificating Darwinian mystic or a Bible-banging Fundamentalist in the lot. Go read some more Dawkins while on a “random walk” in rush hour traffic somewhere.
As for the Darwinian fairy tale, the most failed hypothesis in the history of science –
“Never in the history of mankind have so many owed so little to so many.”
after Winston Churchill
“Darwinians of the world unite. You have nothing to lose but your natural selection.”
after Karl Marx another “prescribed” loser.
You bore me to tears.
Who or rather what is next?
It is hard to believe isn’t it?
I love it so!
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
John:
To summarize:
You are *not* going to answer any of the questions anyone has put to you about evidence against your theory; you aren’t going to present any of the evidence that you claim is in favor of your theory; and you aren’t even going to bother to provide a sufficient citation of your own work to allow us to find a copy to look at. The only thing you will do is sneer, insult people, and scream at the top of your lungs about how you’re right.
If you’re not going to actually *discuss* your theory or any of the questions about it, just why is it that you are even spending the time to come here and comment?
All my work is available at ISCID’s “brainstorms” forum and was also at Uncommon Descent until the biggest bully in cyberspace, David Springer, purged my papers (twice). The proper place to question my papers is in the refereed literature where they were originally published. The mindless drivel that takes place on internet forums like this one means absolutely nothing in any event.
Have a nice “groupthink.”
I am not here to answer questions in any event. I am here to expose ideological bigotry wherever I find it or it finds me.
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
Let’s see…
The only place to question your work is in “the refereed literature where they were originally published”.
But despite three requests, you have not provided bibliographic information that would allow us to even identify which “refereed” journal it was published in. The closest you’ve come is a reference to a discussion forum where, apparently, your papers are posted… somewhere. Visiting ISCID and looking into brainstorms, I see your name in several of the top links. But following those links doesn’t lead to any “refereed publications”; it just leads to massive threads of you insulting people. When I follow the link for your “Prescribed Evolution” paper, the first thing that comes up is:
In other words, more obfuscation and insult in place of actual responsive discussion. Sorry, but I have no intention of wading through 14 pages of discussion for that one paper just to try to find a bibliographic reference to where it was published. I’ll be glad to read your papers and respond to them in a mutually agreeable forum, but only if you’re willing to provide a damned reference. For someone who claims to be a scientist, you’re astonishingly reluctant to grab a bibliographic reference out of your CV!
And I also must note: you have come back to this blog no fewer than 5 times (based on the timestamps of your posts) in order to do nothing more than fling insults?
I do kind of have to say, I think “homozygous” may be the most poorly contrived attempt at an insult I have ever seen.
John:
I just felt that I had to add, in response to your “they laughed at Goldschmidt, Schindewolf, …”
That’s a variation of a classic line quoted by pretty much every crackpot around: from the looniest of creationists, to new-age flakes like the Raelians, to two-bit nutters like Ted Holden.
And the only response to it is… Yes, they laughed at Goldschmidt, they laughed at Schindewolf. And they also laughed at Bozo the Clown.
Torbjorn:
Don’t worry about the “troll” comment. My personal insult
policy is intended to prevent discussions from degenerating into name-calling. But when someone *is* trolling, pointing that out isn’t what I would consider a personal insult. Attaching extra insulting adjectives like “stupid troll” or “asshole troll” would fall under my idea of insults; just pointing out that someone is trolling really isn’t. (Same goes for things like “liar”; if someone is caught in a lie in the messages, pointing that out isn’t an insult, etc.)
I should probably write the comment policy out in an organized way, and put it in the “About” section of the blog.
“they laughed at Goldschmidt, Schindewolf, …” set to the tune of:
===========
THEY ALL LAUGHED
(George Gershwin, Ira Gershwin)
The odds were a hundred to one against me
The world thought the heights were too high to climb
But people from Missouri never incensed me
Oh, I wasn’t a bit concerned
For from hist’ry I had learned
How many, many times the worm had turned
They all laughed at Christopher Columbus when he said the world was round
They all laughed when Edison recorded sound
They all laughed at Wilbur and his brother when they said that man could fly
…
They all laughed at Whitney and his cotton gin
They all laughed at Fulton and his steamboat, Hershey and his
chocolate bar
Ford and his Lizzie, kept the laughers busy, that’s how people are
…
But oh, you came through, now they’re eating humble pie
…
For ho, ho, ho! Who’s got the last laugh?
Hee, hee, hee! Let’s at the past laugh, Ha, ha, ha!
Who’s got the last laugh now?
===========
But, you know, no last laugh here until we see your seminal paper. I just exposed one of mine, warts and all.
It’s called the scientific method, you know. Publish, respond to feedback?
Let the record show that Mark C. Chu-Carroll is oblivious to my several papers all of which have been published in refereed journals. Only the Manifesto has not been published and it will be soon.
The ONLY place to respond to published work is in the venue in which it was presented, a refereed journal. Write that down … pointless insults deleted by MarkCC … Naturally –
I love it so!
It is hard to believe isn’t it?
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
Speaking of Gregory Chaitin,
Gregory Chaitin on physics and mathematics
Gregory Chaitin: Well, my current model of mathematics is that it’s a living organism that develops and evolves, forever. That’s a long way from the traditional Platonic view that mathematical truth is perfect, static and eternal.
Cristian Calude: What about Einstein’s famous statement that
“Insofar as mathematical theorems refer to reality, they are not sure, and insofar as they are sure, they do not refer to reality.”
Still valid?
Gregory Chaitin: Or, slightly misquoting Pablo Picasso, theories are lies that help us to see the truth!
Let the record show that the most substantive of John Davison’s last 7 posts was the one where he corrected his arithmetic on the age of the paper written in 1984. The rest amount to “Nyah, nyah, nyah”. It’s another page out of the troll/crank’s handbook to spend more time explaining why you won’t support your assertions, than it would take to actually do so.
John:
How am I supposed to “be aware of” your papers, when you absolutely refuse to provide any information about where they
were published?
How is anyone supposed to respond to a piece of work in an allegedly refereed journal, when you refuse to specify what journal it was published in?
And what kind of scientist refuses to discuss their work? You keep flinging insults – but I’ve continued to attempt to have a polite discussion with you. I have not insulted you,
called you names, thrown accusations, sneered, or flamed you. All I’ve done is repeatedly attempt to have a civil discussion. To which you have not yet responded with anything remotely resembling a substantive response. Your entire time on this blog – 6 visits now! – has consisted of nothing but constant insults, sneers, and content-free posturing.
It’s no wonder you’ve managed to get yourself banned from so many forums! You contribute absolutely nothing to the forums where you post; you just post content free rants full of insults.
You’ve been warned enough times. You’re welcome to post here – but starting with your most recent post, I will edit comments to remove any pointless insults.
Does this help?
DAVISON, J., 1984 Semi-Meiosis as an Evolutionary Mechanism. J. Theor. Biol. 111:725-735.
DAVISON, J., 1987 Semi-Meiosis and Evolution: A Response. J. Theor. Biol. 126:379.
John:
Now I see why you refused to provide bibliographic references. Based on the table of contents, it appears that your “refereed publications” were not in fact reviewed journal papers, but rather letters to the editor! And according to the journal information at the publisher (Elsevier), letters to the editor are *not* subject to a full review process.
That does rather take the gloss off, now doesn’t it?
Alas, since the 1984/87 editions of the journal are so long out of print, the only way to get a copy of the papers is to pay $30 (for a 2 page paper!) each. Sorry John, but $60 to read your 20 year old papers is a bit beyond my budget. I’m still willing to read and discuss them, but you’ll have to tell me where I can get a copy without shelling out $15/page.
The first can be downloaded here:
Silly linking. Try here:
http://www.uncommondescent.com/documentation/Semi-Meiosis.pdf
I’ve just quickly read DAVISON, J., 1984 Semi-Meiosis as an Evolutionary Mechanism. J. Theor. Biol. 111:725-735. It was a qualitative, rather than quantitative analysis, drawing from other studies, with no new experiments or methodologies reported.
I shall need to read it several more times over the next couple of weeks, and check some references. It draws on qualitative material with which I’m not at all familiar. My first impression was that it was well-written (in prose style), interesting, and provocative. To a first approximation, it seems like a professional piece of work.
The references to Lyell and Darwin and Weismann were ambiguous in effect; some would see them as an early warning of crankiness in trying to give a “theory of everything.” But I do that too. Others would see them as an attempt to ground the discussion in the major historical works.
The rate and mechanism of reproduction assuredly has changed over time, and differs widely between organisms. It is not absurd on its face that the major changes have declined in some genera and fine-tuning is now dominant.
I’ve read many worse papers at Theoretical Biology tracks of interdisciplinary conferences. Thank you, John Davison, et al., for making that available. It could have spared a lot of vitriol if it were available earlier, but, in any case, it seemed to be science. I’d be interested in what feedback it received in the literature.
Thank you, Mark, for pulling teeth to get that here. The jury is out, but at least a case has been made for a non-standard theory.
I’ve myself finally had a chance to read this properly.
I have to wonder about this: “The translocation or inversion of a chromosome is an all-or-none event, an occurrence which is incompatible with the Darwinian notion of gradualism”. Is it really? Apparently large-scale chromosomal changes can lead to no apparent phenotypic change. And the following sentence on “no missing links” harks back to views from the 1930s and 1940s, views that have subsequently been shown to be wide of the mark as new intermediate forms are being discovered all the time (and in the geographical and geological locations in which they’d be expected). The continuum is vastly clearer than it was 60 or 70 years ago.
Given that he says major evolutionary change came to a standstill after the evolution of sexual reproduction, and given that across the whole animal kingdom there’s only one group that entirely does without sex (the bdelloid rotifers), I’d say the latter would seem to contradict the former. There has been vast evolution of new forms and types since sex evolved.
I’ve found another of Davison’s papers:
http://www.uvm.edu/~jdavison/dpaper.html
In it he says “For purposes of argument, let us accept the conclusion that evolution is largely finished.” This seems to answer my question above – he thinks evolution has stopped.
I’m afraid to say that this is contradicted by the evidence. He may have had something interesting to say in the 70s and 80s, but like other superseded theories, its time has passed and the body of theory has moved on. It’s a bit more interesting than (to take this tangent back on topic) Salvador Cordova’s misunderstandings and misrepresentations of CAs and GAs, but it’s clear that John Davison is holding on to an outdated and wrong hypothesis, and his aggression and evasion are symptoms of that. I thought that behind the bizarre posting style there might have been some substance in his old papers, but there isn’t really even that. It’s a bit sad.
One last comment on this, then I think I’ll be putting John Davison out of my mind. Here’s one more of his papers:
http://www.uvm.edu/~jdavison/davison-manifesto.html
In it he says “Perhaps the most compelling feature for the Darwinists resides in their persistent conviction that all of evolution is the result of blind chance. In so doing, the Darwinists refuse to consider that evolution might be subject to laws and precise mathematical relationships such as those that govern virtually every aspect of the inanimate world.”
The first sentence is the classic straw man, the second is just a blatant misrepresentation (as most evolutionary biologists accept that evolution is the result of some generally applied principles).
This takes John Davison well away from eccentric theorist and right into full-blown crank territory. He’s as bad as any of them. Hopefully he’ll wander off to another board soon.
Charlie B, whoever that really is of course.
Tell us what these “generally applied principles” are. I for one would like to know. Gephen J. Stould once compared evolution to a “drunk reeling back and forth between the gutter and the bar room door.” He also claimed “Intelligence was an evolutionary accident.” Are those the “principles” you had in mind?
It is hard to believe isn’t it?
I love it so!
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
Just for the record, as JVP has quoted Ira Gershwin’s lyrics, nobody “laughed at Christopher Columbus when he said the world was round” as, in the 1490s, every educated European knew that the world was round and had done so since at least Eudoxos!
John A. Davison,
Well, for example, sexually reproducing species will generally be very symmetrical in appearance. That’s because disease and misadventure tends to damage symmetry, and so symmetry is a marker of good health, and thus fit genes. As a result a symmetrical appearance is sexually selected for. Now, there are other reasons for symmetry, but contrast the poor bilateral symmetry of our viscera (which is not visible to our mates) to the very good bilateral symmetry of our outward appearance.
Another principle is that not all changes to the genome of a species are equally likely to become prominent in a gene pool. This is due to either the “Natural Selection” or the “Sexual Selection” of the theory. It’s rather fundamental to the theory (and is right in the name), but it’s amazing how many people miss this principle.
How about a non-principle: the only type of mutation is a pointwise (base pair) mutation. Not part of the theory. It’s amazing how many people think it is.
I’m not Charlie B or an evolutionary biologist, but this is hardly obscure stuff. Surely you’re already familiar with all this?
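The “point mutation is not the only kind of mutation” point above can be made concrete with a toy sketch (purely illustrative; the genome string, alphabet, and function names are invented for the example, and real mutational mechanisms are far richer):

```python
import random

ALPHABET = "ACGT"

def point_mutation(genome, rng):
    """Change a single base in place -- the kind of change many
    people mistakenly assume is the *only* kind of mutation."""
    i = rng.randrange(len(genome))
    new_base = rng.choice([b for b in ALPHABET if b != genome[i]])
    return genome[:i] + new_base + genome[i + 1:]

def duplication(genome, rng):
    """Copy a contiguous segment in place -- one of several
    larger-scale mutation types (insertions, deletions, inversions,
    whole-gene duplications) that are also part of the picture."""
    i = rng.randrange(len(genome))
    j = rng.randrange(i + 1, len(genome) + 1)
    return genome[:j] + genome[i:j] + genome[j:]

rng = random.Random(1)
g = "ACGTACGT"
print(point_mutation(g, rng))  # same length, exactly one base differs
print(duplication(g, rng))     # strictly longer than the original
```

Note that the duplication grows the genome, which is exactly what a pointwise-only picture of mutation cannot capture.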
Whoa. Internet deja vu. Sal runs away… JAD comes and waves his arms whilst avoiding any substance. Is it Groundhog Day?
There is absolutely nothing in the Darwinian paradigm that ever had anything to do with creative evolution beyond the elaboration of intraspecific varieties which are all evolutionary dead ends anyway.
“A past evolution is undeniable, a present evolution undemonstrable.”
John A Davison
John:
You keep saying that. And yet, whenever anyone asks you to present any evidence, you just ignore them.
Let’s try a slightly different tack, one which falls more into the mathematical side of things, which is the kind of thing that I’m most interested in. What I’m interested in is what would form a valid counterexample to your argument. In the interests of fairness, I’ll answer the question from the other side – that is, what would form a counterexample to my criticism of your argument.
For you: How do you define the difference between “elaboration of intraspecific varieties” and larger evolutionary changes? At what point does a new, previously unknown genetic trait become significant enough that it would form a counterexample to your argument?
For me: what would convince me that I’m wrong that genetic variation and speciation is a product of mutation?
My answer: there are two distinct kinds of counterexamples/counterproofs that would be convincing to me. They’re both mathematical (this is a math blog after all!), meaning that what I’m after is not a specific example (although specific examples would be interesting), but rather a careful logical modeling of the process that
allows you to present a proof of one of your claims.
What I’d like to see is a demonstration of anything that prevents genetic change/mutation from “adding up”. The claim that evolution cannot create large changes is, ultimately, a claim that you cannot go a long distance by taking small steps. To show that, you need to do one of two things: show that there’s something that stops the addition of changes beyond some maximum variation; or show that changes cannot be combined. These two correspond to the two forks of your claim: that there is no “creative evolution beyond elaboration of intraspecific varieties” (i.e., there is a barrier preventing change beyond the level of intraspecific variation), and that all variations are “evolutionary dead ends anyway” (you can’t combine changes, because any change is a dead end).
So – either of two things would convince me that you’ve got a serious theory and that I’m wrong: One would be a demonstration of a barrier of any kind that will halt any genetic changes beyond a certain maximal change – so that the clearly observed intraspecies genetic changes that have been observed are permitted, but larger changes that could lead to significant modifications/speciation are not possible.
The other would be a demonstration that all observed genetic
changes within species are evolutionary dead ends.
What would convince you that gradual genetic change can lead to “creative evolution”?
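For what it’s worth, the “small steps add up” side of this challenge is easy to illustrate with a toy selection loop in the style of Dawkins’ well-known “weasel” program. This is a sketch, not a biological model: the target string, alphabet, population size, and mutation rate are all arbitrary choices, and having a fixed target is itself a simplification. The point is only that nothing in the mathematics stops small selected changes from accumulating into an arbitrarily large total change:

```python
import random

def evolve(target, alphabet, rng, pop=50, mut_rate=0.05):
    """Toy selection loop: each generation, make `pop` slightly
    mutated copies and keep the best.  Small selected changes
    accumulate into an arbitrarily large total change.
    Illustrative only -- not a model of real biology."""
    def score(s):
        return sum(a == b for a, b in zip(s, target))

    def mutate(s):
        return "".join(rng.choice(alphabet) if rng.random() < mut_rate else c
                       for c in s)

    current = "".join(rng.choice(alphabet) for _ in target)
    generations = 0
    while current != target:
        # Keep the parent in the pool, so fitness never decreases.
        current = max([current] + [mutate(current) for _ in range(pop)],
                      key=score)
        generations += 1
    return generations

rng = random.Random(42)
gens = evolve("METHINKS IT IS LIKE A WEASEL",
              "ABCDEFGHIJKLMNOPQRSTUVWXYZ ", rng)
print(gens)  # the target is always reached; the count depends on the seed
```

A serious barrier claim would have to identify something in real genetics that breaks this accumulation, which is exactly the demonstration being asked for above.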
You keep saying that. And yet, whenever anyone asks you to present any evidence, you just ignore them.
This has been going on for years, Mark, YEARS. Davison blasts onto a discussion board (or more recently, blog comment sections) ‘daring’ anyone to find fault with his ground-breaking, earth shattering, non-experiment based anti-Darwinian publications, then when such criticisms are laid out, he insults and ignores.
I must say that I especially like his claim that evolution is “instantaneous” – even though it doesn’t happen anymore – because mutations themselves occur very quickly.
A howler, to be sure.
My suggestion is to just ignore him. Like a herpes outbreak, his presence is unpleasant and nearly impossible to ignore, but, in time, it will go away…
Zero:
I deleted your comment as spam. Your endless, pointless numerology has absolutely nothing to do with this post. If you want to post more of your silly gibberish, you can wait until I post something else about numerology; otherwise, take it somewhere where someone cares.
slp wrote:
As I mentioned earlier, there are tools to help.
John:
I note that you don’t deny the probable existence of such critique, contrary to your previous claim.
Ducking the fact that Rivista isn’t peer-reviewed, I see.
’nuff said.
Mark:
Noted. I was posting under the influence – it may make for a good party, but not the best logic. I also went on to answer Davison, for example. Bad, bad logic. 🙂
[/Mark]
[off-topic]
Wow, Chaitin is on my side! (Doing an uncalled-for victory dance. 😉) Can we consider putting math and CS in with the sciences now?
Chaitin: “for years I’ve been arguing that information-theoretic incompleteness results inevitably push us in the direction of a quasi-empirical view of math, one in which math and physics are different, but maybe not as different as most people think. As Vladimir Arnold provocatively puts it, math and physics are the same, except that in math the experiments are a lot cheaper!”
Totally on my side. 🙂
[/off-topic]
Charlie:
By explicitly disregarding gene duplication, and implicitly such mechanisms as genetic drift and fixation. Perhaps he can’t be bothered to read the literature? 😉
Nice dissection of his work, btw.
Wrong again.
Rivista is most definitely peer reviewed and always has been. As a matter of fact Sermonti no longer will accept manuscripts from me and will not even answer my emails as to why. I certainly must be doing something right. I have now been able to alienate both the “Fundies” and the “Darwimps,” something I have been striving to accomplish for quite some time now.
“The main source of the present-day conflicts between the spheres of religion and science lies in the concept of a personal God.”
Albert Einstein
Thanks for exposing your ignorance. That way I don’t have to do it.
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
Chaitin agrees with me, too! I was going to post something in one of these overloaded “Most Active” threads about that very topic; fortunately, Chaitin has made my point more succinctly than I could.
By the bye, here’s the source for the Chaitin quote.
Since it is infamous among biologists, we can assume it isn’t peer-reviewed. As I have now pointed out three times.
John:
Since I’m not familiar with Rivista, I decided to do what I would normally do if I were considering submitting a paper to a journal. I went to check its website, and look at the information for authors.
Nowhere on the Rivista website (http://www.tilgher.it/(tbq4uemxhqupl4mpabgrok45)/index.aspx?lang=eng&use=1.1&tpr=4) could I find any mention of the review process. Nothing, nada, zip.
Contrast that against a real peer reviewed journal; the one that I thought of off the top of my head is one of my favorite CS journals, the IEEE Transactions on Software Engineering (TOSE). TOSE has a link on the front page, “Information for Authors”, and on that page (http://www.computer.org/portal/site/transactions/menuitem.eda2ca84d8d67764cfe79d108bcd45f3/index.jsp?&pName=transactions_level1&path=transactions/tse/mc&file=author.xml&xsl=article.xsl&) they have extensive information about the review process: how to suggest who should/should not review the paper; how to request blind review and how the paper needs to be written for a blind review; how authors are expected to respond to criticisms in the review, etc.
To be sure that the complete review process information wasn’t just a Math/CS thing, I also checked a variety of journals on biology and genetics at Elsevier and Springer, two of the major scientific journal publishers. Every single one without exception had information about the review process prominently placed.
My conclusion? That Rivista is most likely not a serious peer reviewed journal.
Likewise, the 1984 and 1987 papers in the Elsevier journal that you refused to provide a citation for were not peer reviewed papers, but minimally refereed letters to the editor.
“The main source of the present-day conflicts between the spheres of religion and science lies in the concept of a personal God.”
WARNING: Classic Creationist/IDist Quote Mining Alert! But most readers here already knew that, I’ll wager.
It’s possible (although doubtful) that Mr. Davison is not aware that Einstein doesn’t use the word “god” the way the typical religious person does. Big Al didn’t utter this statement in support of faithful belief but, rather, to point out that dogmatic belief in that which cannot be demonstrated can prevent believers from accepting that which has been demonstrated.
For the possible benefit of Mr. Davison:
“It seems to me that the idea of a personal God is an anthropological concept which I cannot take seriously. I also cannot imagine some will or goal outside the human sphere…”
— Albert Einstein, “Religion and Science,” New York Times Magazine, 9 November 1930
I think it’s clear what Mr. Davison’s goal is (being inside the human sphere) but, again, it’s a pretty safe bet that he knowingly chose to exclude or ignore the contextual portion of his Einstein quote.
Einstein wasn’t a philosopher per se.
As to the phrase “personal God”, I encourage all to dissect it semantically. What is the avowed nature of “God”? Omnipresence, omnipotence, etc. So how does that differ from existence itself? And “personal”? Well, all that I am is part of existence, so that omnipresent omnipotence comprises all of me down to the quantum level. How can one get more “personal” than that?
So in a sense, even a hard core scientist should be able to acknowledge that a “personal God” exists: EXISTENCE. (Remember that words aren’t mathematics and can carry many different and often contradictory connotations.)
Of course, I know what the simplistic IDists mean by the term “personal God”, but that’s exactly why they should dissect it! An anthropomorphic caricature is NOT isomorphic with the God qualities they themselves avow; God can’t literally be all things everywhere and a single, separate, anthropomorphic personality at the same time (except either as each one of us in an amnesiac sense – see Zen Buddhism – or in a poetic or metaphorical sense).
If anyone is interested in evaluating Davison’s dead-tree published (and/or intended for publication) papers dealing with evolution there is a collection of them in chronological order hanging out at:
http://www.uncommondescent.com/archives/1647
password: phylogeny
TheUsualSuspect
Thanks for the link. I hope you all enjoy some real evolutionary science. (MarkCC: Gratuitous insults deleted.)
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
Speaking of Ed Brayton over at Dispatches, he has already banned me.
So has that (MarkCC: insult deleted) Falan Ox (MarkCC: insult deleted.). Imagine, if you can, being banned from a thread with the title, “John Davison, this is for you,” after that thread has chalked up 400 comments.
Or how about being banned after evoking 60,000 views on my thread “God or Gods are dead but must have once existed.” over at Dickie Dawkins’ fan club?
It is hard to believe isn’t it?
I am curious to find out how long you folks will tolerate a real honest-to-God bench scientist. Incidentally, if any of you are interested in a serious dialogue, I recommend “brainstorms.” For some reason they haven’t banned me yet. I’ll leave a light on for you. I think I am still extant at Telic Thoughts. I haven’t checked lately.
“God designed the stomach to vomit up things that were bad for it but he overlooked the human brain.”
Konrad Adenauer
“Since God found it necessary to limit man’s intelligence why didn’t he also limit his stupidity?”
ibid
I love it so!
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
Dr. Davison asked, “I am curious to find out how long you folks will tolerate a real honest-to-God bench scientist”
indefinitely, I suspect – provided that said scientist actually discusses bench science and keeps to the subject of the thread.
Having reviewed your publications, however, I am concerned by the lack of any ‘bench science’ in them. They appear to be almost entirely digests of other people’s work. Have you actually done any ‘bench science’ on this subject that we can discuss?
I await your references.
John:
Given that you don’t participate in discussions, but merely fling insults I don’t find it at all surprising that people have banned you. I’ve done everything in my power to try to enable a civil, scientific discussion here, and you’ve refused to do anything except insult everyone in sight.
And you are not an “honest to god bench scientist”. A bench scientist does experiments, publishes results, and participates in discussions with other scientists. You don’t do anything resembling experiments; since you don’t do experiments, you can’t publish the results of any experiments, and you seem to be completely incapable of discussing anything with anyone.
John:
One thing I forgot in my previous comment. I find it truly appalling that in the same breath, you complain about how everyone has banned you from their discussion areas, and then recommend that people come post in the message area at ISCID – a site which is well known for banning commenters for any disagreement with the moderators.
Davison apparently last did bench science in or prior to 1976, when his last peer-reviewed experiment-based paper came out. Then an 8-year drought, then his speculative semi-meiosis paper, then a 3-year break followed by a response to apparent criticisms of his semi-meiosis paper, then a string of self-referential essays in Rivista referencing out-of-date books. Rivista’s editor-in-chief is an avowed anti-Darwinian who has also allowed ‘papers’ by creationists Jerry Bergman and Jonathan Wells to make it into print. It was at one time a respectable journal; now it seems little more than an outlet for fringe anti-Darwin cranks.
http://www.uvm.edu/~jdavison/jad-cv.html
For a joke, go take a look at one of JAD’s many websites. PZ has the write-up: http://scienceblogs.com/pharyngula/2006/08/update_your_blogrolls_um_not.php
JAD has only one post per blog, and when it gets too many comments (mostly from himself) he doesn’t post another article, he starts a new blog!!
There is no need for me to do experiments any longer. They are being done for me in molecular biology laboratories all over the world. Nothing now being revealed can ever be reconciled with the Darwinian hoax, absolutely nothing. I thought everybody knew that by now. Darwinism never had anything to do with evolution beyond the elaboration of intraspecific varieties none of which are incipient species anyway.
Darwimpianism, as I have learned to call it, should have died in 1873 when St George Jackson Mivart asked the question – How can natural selection have been involved with a structure which had not yet appeared?
The whole business was planned from beginning to end and the end is now. Get used to it. Leo Berg did, Otto Schindewolf did, Pierre Grasse did and so have I.
It is hard to believe isn’t it?
I love it so!
Have a nice “groupthink.”
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
Dr. Davison, it would appear that you are not, in fact, a ‘bench scientist’. How then can we have any meaningful discussion of bench science?
While your invective is amusing (and occasionally instructive), it fails to advance your cause. What specific molecular research do you claim supports your case?
You can’t bluff your way out of the hole. Show us your hand.
I was a bench scientist for half a century. Now I am letting others support my predictions. As for evidence, both direct and indirect, I refer you to –
A Prescribed Evolutionary Hypothesis, Rivista di Biologia 98: 155-166, 2005
You will find it discussed at some length at “Brainstorms,” and also at EvC, where it was “Showcased,” much to their chagrin. If you have any questions you may ask them here or at “Brainstorms,” but I’m afraid that you will have to prove to me first that you have read it. You will also have to be admitted at “Brainstorms” which, judging from your mouth, won’t be very likely.
I do not bluff; I enlighten, but only those who choose to learn. My “hand” is published, Bronze Dog, whoever that is. Where may I find your evolutionary papers?
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
So, uh, what happened to Sal?
Having trouble finding the actual paper, rather than references to it and short quotes. Seems like you’re trying to argue that convergent evolution is evidence of design, almost as if you’re saying that creatures from different times and parts of the world will never run into similar problems and evolve similar solutions. I pick up a whiff of argument from personal incredulity mixed with a false dilemma.
As for my anonymity: So what? I’m not claiming to run experiments. I don’t need to do any of that to point out logical fallacies.
During my little search, I noticed you bringing up that randomness fetish all the IDers out there seem to have, as if evolution were determined solely by chance. It doesn’t help your case if you don’t even understand evolution before you criticize it.
John: Where may I find your evolutionary papers?
You seem to think that this silly rejoinder somehow helps your case. The debate is not between you and Bronze Dog; it’s between your position and the mainstream scientific position. Yours is represented by a single article in Rivista. Ours is represented by hundreds of thousands of papers in more respected venues. Pointing out that you’re published only serves to remind us of this huge discrepancy.
“So, uh, what happened to Sal?”
I presume he chickened out.
I hate to have to be the one to defend Sal, but he did mention that he was going to be away until tuesday. 🙂
But this IS Tuesday…. isn’t it? Doesn’t matter – the computability posts are much more interesting than watching Sal fail to address any of the real issues with his contentions.
Scarlet:
D’oh, you’re right. I’ve been dealing with sick children and a sick dog since Sunday, so I lost track. It feels like it’s just been one long day. I was convinced it was Monday!
(Incidentally, the kids are now fine; the dog had some minor surgery today, and is also now A-OK.)
Davison is just another puppet out of the Behe camp: I don’t have to perform experiments, and the burden is on my opponents to disprove me.
[yawn]
I am no one’s puppet and I wouldn’t give you a nickel for Bichael Mehe, Jillip Phonson, Dilliam Wembski, Wonathon Jells or the whole damn Discovery Institute. I have been ignored by the whole damn bunch for the same reason I am ignored and vilified by the Darwimps. So were my sources also ignored, and for exactly the same reasons. Both sides of this stupid debate are wrong, dead wrong. The truth lies elsewhere and I know where that is.
Incidentally, mainstream science has historically always been wrong, dead wrong. I thought everybody knew that by now. Remember Phlogiston, and the Ether? Well now you have Natural Selection, Random Mutation, Genetic Drift and Sexual Reproduction, none of which ever had anything to do with creative evolution.
“The one thing we learn from history is that we don’t learn from history.”
anonymous
“A past evolution is undeniable, a present evolution undemonstrable”
John A. Davison
You presumed wrong. 🙂
It turns out, von Post, that it would be a little bit less embarrassing for someone of your credentials to actually refer to the correct channel that is being discussed, rather than one that was not being discussed.
Would you, when asked to calculate channel capacities for twisted pair, come back with calculations for optical fiber? Your little Curriculum Vitae dump was about as misdirected as that…
You misrepresented my argument and what I was referring to. I was not referring to the channel capacities of proteins or metabolisms. I was referring to the channel capacities of evolutionary processes with reference to population resources like numbers of organisms, numbers of mutations, etc. I was not referring to the channel capacities of proteins but the channel capacities of evolutionary mechanisms.
You might try to actually address the argument I put on the table rather than fabricate an argument I didn’t put on the table.
So, von Post. Care to state the channel capacities of evolutionary mechanisms in the real world in terms of realistic population parameters and mutation rates, etc.? Or will you come up with strawman arguments and literature bluffs like you did in your previous comments, and start discussing channel capacities of entities that I was not talking about?
Shallit showed himself to be pathologically unwilling to accurately represent Dembski’s works. If one wishes to refute Dembski’s works, one should not resort to strawman misrepresentations of what Dembski actually wrote, Shallit did a lot of that, and when I confronted him at Pandas Thumb on some of it, he rather embarrassed himself….
The issue is not how much you know about information theory or whether you know Greg Chaitin. The issue is whether your recycling of Shallit’s strawmen is appropriate to characterizing Dembski’s work.
Dembski referred to an instance of an algorithmically compressible string as an example of a specification of CSI, but by no means is CSI restricted to being specified by algorithmically compressible strings. Shallit seemed completely willing to suggest the strawman that Dembski’s CSI refers only to algorithmically compressible strings…
The way bits of CSI are calculated is very consistent with Shannon’s conception of information and the reduction of uncertainty.
Therefore your claim:
is unfounded. To prove your claim, you will need to cite the specific sections of relevant ID literature, not recycled Shallit-Elsberry-Perakh. I don’t think there is sufficient evidence that your claim is correct; it is merely a recycled misrepresentation.
[And no, just because one instance of CSI may be specified by an algorithmically compressible string, one cannot generalize from an instance of an algorithmically compressible string to all instances of CSI.]
Are you sure about that?
Sal responded, “You presumed wrong. :-)”
The jury is still out, Sal. Your track record is not in your favor.
Sal also said, “You misrepresented my argument and what I was referring to. I was not referring to the channel capacities of proteins or metabolisms. I was referring to the channel capacities of evolutionary processes with reference to population resources like numbers of organisms, numbers of mutations, etc. I was not referring to the channel capacities of proteins but the channel capacities of evolutionary mechanisms.” This is part of your problem. The concept of a ‘channel’ in the information sense (which is how you’re using it) is inappropriate in the context of a time-based evolutionary process.
Besides, you made the claim; YOU specify what the ‘bandwidth’ of the evolutionary channel is. Unless you can actually show some real numbers and quantify your claims, all you are offering is yet another ‘argument from ignorance’. Since this is your usual procedure, I don’t hope for much, but you might want to give it a chance.
Specify the ‘channel’ capacity of the evolutionary process; show how YOU determined it; show what the necessary capacity would be to evolve some simple biological structure.
Show the math. And not by cutting and pasting, please – your cut-n-paste jobs never actually support the points you are trying to make.
Sal:
It’s fascinating to see you challenging John von Post to show some actual numbers/calculations about the bandwidth of an evolutionary process over time, when you have never presented any numbers or calculations for your argument. You made the assertion about “channel capacity”, but without bothering to define the channel in a mathematical way! How is anyone supposed to refute that mathematically? It’s just another round of the same old nonsense: you don’t define your terms precisely, so then you can wave off any critique by saying that it’s using the wrong definition.
It’s the same game that Bill Dembski constantly uses for CSI – he refuses to settle on a single consistent definition of “specification” – and then he can cut down any criticism of the concept of specified complexity by saying “you got the definition wrong”.
If you want to make an argument from Shannon theory that the “channel capacity” of the evolutionary process is insufficient, it’s up to you to make that argument: you need to define what you mean by the channel, show how you did the computation of the Shannon capacity of that channel, and show that the amount of information required by an evolutionary process exceeds that channel capacity. You haven’t done that – and so there’s no way to meaningfully refute your claim, because it’s not sufficiently defined for anyone to be able to tell whether or not they’re addressing the actual argument you claim to be making.
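For the curious, here is what a minimally well-posed version of that demand looks like. This is a generic textbook sketch (the binary symmetric channel, the simplest channel in Shannon’s theory), not anything Sal has supplied:

```python
import math

def binary_entropy(p):
    """Shannon entropy H(p) of a weighted coin, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover
    (bit-flip) probability p: C = 1 - H(p) bits per channel use."""
    return 1.0 - binary_entropy(p)

# A noiseless channel carries a full bit per use; a channel that
# flips bits half the time carries nothing at all.
print(bsc_capacity(0.0))            # 1.0
print(bsc_capacity(0.5))            # 0.0
print(round(bsc_capacity(0.1), 3))  # 0.531
```

The point of the exercise: before you can assert that a capacity is “too small,” you have to write down the noise model that defines it. No such model appears anywhere in Sal’s comments.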
Sal: Shallit showed himself to be pathologically unwilling to accurately represent Dembski’s works.
You’ve made this accusation in at least two other forums, and then abandoned the discussions when it was shown to be ridiculous. Recycling it again doesn’t make it true.
Sal: Dembski referred to an instance of an algorithmically compressible string as an example of a specification of CSI, but by no means is CSI restricted to being specified by algorithmically compressible strings.
Okay, Sal, give us an example of CSI that isn’t expressible as an algorithmically compressible string.
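(An aside for readers unfamiliar with the compressibility point: Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude, computable stand-in for “algorithmically compressible.” This toy sketch using zlib is my illustration, not anything from Dembski or Shallit:)

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed form, a rough computable
    upper-bound proxy for algorithmic (Kolmogorov) complexity."""
    return len(zlib.compress(data, 9))

# Patterned data: a tiny program ("print 'AB' 5000 times") generates it.
repetitive = b"AB" * 5000
# Random data: no short description exists.
random.seed(42)
noisy = bytes(random.randrange(256) for _ in range(10000))

print(compressed_size(repetitive))  # a few dozen bytes
print(compressed_size(noisy))       # roughly the original 10000 bytes
```

Dembski’s compressible-string examples lean on exactly this kind of short-description property; what Sal needs to exhibit is a specification that works when no such short description exists.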
MarkCC wrote:
This reminds me of Velikovsky’s Worlds In Collision nonsense. His claims about any of the astrophysical phenomena he proposed — say, Venus being born as a comet ejected from the Jupiter system — were so vague that Sagan had to try inventing specific models just to get quantitative results and show that the whole thing was poppycock! At best, this was a waste of time, and at worst it was counterproductive. One hopes that skeptics have learned from past experiences.
Vagueness has always been the pseudoscientist’s friend. Anything which sounds mathematical but is never tied to an exact definition is a sure winner; the only better circumstance is when we’re told that there is an exact definition, but only in the author’s self-published $19.95 treatise explaining how his work has been suppressed by the scientific community. . . . To pick a modern example out of a cluttered field, try the Cognitive-Theoretic Model of the Universe, which is just all kinds of silly.
Mark C. Chu-Carroll: I’m curious to know if you feel it’s possible to calculate, in any meaningful way, the odds of DNA forming in our universe in the 20 billion years since the “Big Bang,” based solely on initial physical/energy conditions.
I tend to think that any such calculation is beyond our ability to make to any truly meaningful or accurate level, but I’m not a mathematician and would like your opinion.
Norm:
No, I don’t believe that it’s possible to generate a really meaningful probability for the formation of DNA via purely natural means. I don’t believe that we have nearly enough information to be able to do that. When we try to put together probability estimates, so many of the numbers involved are just guesses – and you can’t produce a meaningful probability estimate from random guesses.
Just for example, here are a few crucial things that we don’t know, but that we would need to know to make a meaningful probability estimate.
(1) How big is the universe? In particular, how many places are there in the universe where a molecule like DNA could conceivably form?
(2) What does an initial self-replicating molecule look like? Is DNA a starting point, or is it a product of an evolutionary process? If it’s a product, what was the precursor to it?
(3) What was the environment like at the time of the formation of the initial self-replicating molecule?
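A toy calculation shows why those unknowns are fatal. Every number below is invented purely for illustration (hypothetical ranges for hypothetical factors), but it shows how order-of-magnitude guesses compound:

```python
import math

# Each factor is "known" only to within several orders of magnitude.
# All of these ranges are made up for the sake of the example.
factors = [
    (1e20, 1e24),   # plausible sites for chemistry (hypothetical range)
    (1e-10, 1e-4),  # chance per site per year of some key step (hypothetical)
    (1e8, 1e10),    # years available (hypothetical)
]

low = high = 1.0
for lo, hi in factors:
    low *= lo
    high *= hi

# The "answer" spans twelve orders of magnitude: it tells us nothing.
print(round(math.log10(high / low)))  # 12
```

Three guessed factors, each uncertain by a few powers of ten, and the final estimate is useless. That is why “probability of DNA” calculations, pro or con, aren’t meaningful with what we currently know.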
Thanks, Mark. Your answer was as I expected, and if I could intuit it, it must be pretty damn obvious (lol).
Of course, even if such a calculation could be made, it still wouldn’t prove cosmic ID in any anthropomorphic sense.
Since the basic issues re ID are so obviously untestable, it amazes me that folks can get into such convoluted arguments about it over evolutionary biomechanism details.
Reading through the various posts on this subject on these science blogs has been an education for me. If nothing else, I’ve realized that the referencing of “sheer chance” in any ID argument is a red herring, since any initial conditions of the universe preclude sheer chance by definition.
And, of course, string theory, multiple-universe theory, and possible misapprehensions of the size or age of the universe all add to the high degree of mootness in the ID statistical argument.
With some humility, I’ll admit that my three-decade-old Channel Capacity work would need considerable effort to expand into a Universal Evolutionary Channel Capacity result. But that was not what I sought in 1973-1977.
As to how hard experiments with DNA and its components can be, and why they might relate to Neodarwinian versus Intelligent Design arguments:
http://www.sciencedaily.com/releases/2007/01/070109142101.htm
Source: Ohio State University
Date: January 10, 2007
New Study Sheds Light On ‘Dark States’ In DNA
Science Daily — Chemists at Ohio State University have probed an unusual high-energy state produced in single nucleotides — the building blocks of DNA and RNA — when they absorb ultraviolet (UV) light.
Computer-generated image of DNA double helix. (Courtesy of the National Human Genome Research Institute)
This is the first time scientists have been able to probe the “dark” energy state — so called because it cannot be detected by fluorescence techniques used to study other high-energy states created in DNA by UV light.
The study suggests that DNA employs a variety of means to dissipate the energy it absorbs when bombarded by UV light.
Scientists know that UV light can cause genetic alterations that prevent DNA from replicating properly, and these mutations can lead to diseases such as cancer.
The faster a DNA molecule can dissipate UV energy, the lesser the chance that it will sustain damage — so goes the conventional scientific wisdom. So the dark states, which are much longer lived than previously known states created by UV light, may be linked to DNA damage.
The existence of this dark energy state — dubbed n(pi)* (pronounced “n-pi-star”) — had previously been predicted by calculations. Other experiments hinted at its existence, but this is the first time it has been shown to exist in three of the five bases of the genetic code — cytosine, thymine and uracil.
The detection of this dark state in single bases in solution increases the chances that it may be found in the DNA double helix, said Bern Kohler, associate professor of chemistry at Ohio State and head of the research team.
The Ohio State chemists determined that, when excited by ultraviolet light, these three bases dissipate energy through the dark state anywhere from 10-50 percent of the time.
The rest of the time, energy is dissipated through a set of energy states that do fluoresce in the lab. These “bright” energy states dissipate the energy much faster, in less than one picosecond.
A picosecond is one millionth of one millionth of a second — an inconceivably short length of time. Light travels at 186,000 miles per second, but in twenty picoseconds it would only travel just under a quarter of an inch. Still, a picosecond is not so fast compared to the speed of some chemical reactions in living cells.
In tests of single DNA bases, the dark state lasted for 10-150 picoseconds — much longer than the bright state. The chemists reported their results in the Proceedings of the National Academy of Sciences.
“We want to know, what makes DNA resist damage by UV light?” said Kohler. “In 2000, we showed that single DNA bases can dissipate UV energy in less than one picosecond. But now we know that there are other energy states that have relatively long lifetimes.”
“Now we see that there is a family of energy states in DNA responsible for energy dissipation, and this is a major correction in how we view DNA photostability.”
Until now, the proposed dark energy state of DNA was a little like the dark matter in the universe – there was no direct way of probing it. The Ohio State chemists used a technique called transient absorption, which is based on the idea that molecules absorb light at specific wavelengths, and allows them to study events happening in less than a picosecond.
They found that DNA dissipates UV energy through the dark state 10-50 percent of the time, depending on which DNA base is excited, and whether a sugar molecule is attached to the base or not.
Next, Kohler’s lab is investigating whether the dark state can be linked to DNA damage.
“What are the photochemical consequences of long-lived states? Are they precursors to some of the chemical photoproducts that we know cause damage? That’s the Holy Grail in this field — connecting our growing knowledge of the electronic states of DNA with the photoproducts that damage it,” he said.
Kohler’s coauthors include Carlos E. Crespo-Hernandez, a former postdoctoral researcher at Ohio State, and Patrick M. Hare, who just obtained his Ph.D. from the university and is about to begin a position as a postdoctoral researcher at the University of Notre Dame.
Note: This story has been adapted from a news release issued by Ohio State University.
As for being someone’s puppet, I recommend you all visit American Chronicle, January 10, 2007 edition where you will find that Kazmer Ujvarosy has introduced my brief essay – “The Darwinian Delusion” as an antidote to Richard Dawkins’ delusionary – “The God Delusion.”
Read, enjoy and repent. It is later than you think.
I love it so!
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
I’m not a Richard Dawkins fan, but I’ll at least give the guy credit for being a decent scientist and a smart, honest advocate for his position. I may not agree with Dawkins’s arguments, but I’ll admit that he’s capable of putting together an argument, presenting it in an engaging, passionate way, and debating it in a civilized fashion with people who disagree with him.
On the other hand, you are, by your own demonstrated behavior here, an arrogant, ignorant buffoon whose idea of an intellectual argument is ignoring anything anyone else has to say, shouting really loudly, calling other people names, and proclaiming victory.
After watching your behavior here, why would anyone take an article by you seriously? Particularly an article introduced by a self-promoting schnook like Ujvarosy?
John:
Also, let me point out that you have posted at least 20 comments over 17 visits to this blog, and in all of those, you have yet to actually respond to a single point, or to answer a single question raised by anyone here.
I didn’t come here to be interrogated by a bunch of Darwimps. I came to expose you all as the atheist “prescribed” lightweights that most certainly you all are. The place to criticize my work is in journals, not in ephemeral idiotic “groupthink” blogs like this one. No one has so far and I know why. They can’t.
I love it so!
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
The place to criticize my work is in journals
How so? The work itself wasn’t in a journal; it was in Rivista di Biologia.
I didn’t come here to be interrogated by a bunch of Darwimps. I came to expose you all as the atheist “prescribed” lightweights that most certainly you all are.
And you’ve done a bang-up job. Your flawless analyses of copious empirical data have exposed us as feckless charlatans. Your work here is done.
John:
You came to this blog close to 20 different times not to participate in any real discussions, but just to call people names?
And if your goal was to “expose” me as an atheist, then you’ve failed miserably. I was a religious Jew before you showed up, and I’m still a religious Jew.
And given that the most recent thing that could be called a journal paper was published 20 years ago (and even that was not really a journal paper, but a letter to the editor! If I put a letter to the editor down as a journal paper on my publication list for my yearly review at work, I’d be called on the carpet for lying!), how is anyone supposed to respond to it? Do you think that real scientific journals accept criticisms of 20-year-old letters to the editor?
Does this mean that you have decided to ban me? That is the usual response I evoke when I have exposed “prescribed” bigotry.
No, banning is the response you provoke for being a troll and a fool.
Contradiction. You can’t “expose” us if you won’t answer our questions.
All you’ve been doing is ignoring us, and pretending that those questions and criticisms will only be valid if written in ink. Sorry, but pigment on wood pulp does not define reality, and criticism is fair game just about everywhere.
Also, considering the savaging you got here, I don’t think you’d make it in a real journal: When you’re having an argument, you don’t ask people to write something down, mail it to an alleged peer review journal, and hope it gets published: You argue with them in the here and now. You don’t retreat to safer ground where most of the arguments won’t get through.
Oh, and what’s that about “a present evolution” being “undemonstrable?” Try reading the science and technology section of the newspaper sometime. It’s been done countless times, and I intend to do some of my own on my under-utilized desktop computer, sometime. (Been having trouble running Breve’s Creatures: Don’t know how to get the program started.)
John Davison wrote: “I didn’t come here to be interrogated by a bunch of Darwimps. I came to expose you all as the atheist “prescribed” lightweights that most certainly you all are.”
Apparently we aren’t all atheists here. I don’t consider myself one, either. I’m a mystic with an understanding of “God” that transcends simplistic literality.
From your writings, I’ve discovered that you consider yourself a Catholic, but you believe that God is dead (a blatant contradiction). And, you claim to be discussing only the science, yet you accuse us of atheism!
Try being a bit more consistent. I’m an artist, not a scientist, and yet I think even I display more consistency than you do.
John said, “Does this mean that you have decided to ban me? That is the usual response I evoke when I have exposed “prescribed” bigotry.”
The reason you couldn’t be banned for this is that you have not yet ‘exposed’ any bigotry.
You have not, for example, demonstrated that we are all atheists. You have not, for example, demonstrated that we are all lightweights. You have not, for example, demonstrated that we are bigots.
In order to demonstrate any of those points, you would need to correctly identify such beliefs or behavior on the part of the various posters; I cannot find in your posts anywhere you have done this. Where have you demonstrated that we are all bigots, for instance?
Earlier I said,
“And I note that Sal, despite his bravado on Ed’s blog, has yet to show up here.
This is fairly typical Sal behavior – whenever a conversation gets uncomfortable (i.e. he is shown to be an idiot) he bails.
ATBC has a link to a simply marvelous thread on http://www.kfcs.org on mantle plumes, in which Joe Meert utterly demolished Sal, who then fled the thread when it was pointed out that he had grossly misunderstood Walt Brown’s book.
The interesting part of that particular thread was Sal’s confession that he rejects the work and teaching of actual experts in a given field (e.g. Joe Meert) because they make him look like a fool. I thought it was the most telling analysis of Sal’s basic style I’ve come across.
I expect much the same will happen here: Sal will display a misuse of standard terminology; various attempts to bolster his fallacious position by article cites that don’t actually support his point; followed finally by an abandonment of the thread when he feels that we’ve made a sufficient fool of him.
I’ve got the wasabi-peanuts and my comfy pillow. I’m ready.”
I stand by my points: the vague definition or misuse of “channel”; misuse of Yockey’s work; and his hit-and-run performance of this morning.
Well, Sal?
John:
Who said anything about banning you? I wouldn’t give you the satisfaction! If I’d had any intention of banning you, I would have done it at the same time that I started editing the insults out of your comments.
I will merely point out that I’ve given you an open forum, where I’ve politely asked you questions over and over; and that you apparently consider this site being worth enough of your time to return here to comment on 20 different occasions over nearly a week; but that despite both the open forum and the numerous returns and comments, you have yet to address a single question that anyone has put to you.
What I conclude from this is that either (a) you have nothing to say, or (b) you place a very low value on your time and energy.
Does anybody else detect a whiff of a martyr complex here? Somebody sounds just a bit too eager to put another notch in the “Censored by The Man” belt.
I’ve run across a few such martyr-type personalities on my own forum, and yet John Davison is the most obvious example I’ve seen yet. He brags about his multiple bannings in many of his posts.
Scarlet:
Even more interesting is Sal’s later spin on the experience. Just a few weeks ago he said:
Whatever else you can say about Sal, you have to admit that his tall tales can be very entertaining.
Mark C. Chu-Carroll:
I’d be fascinated to read of how you personally manage and moderate the relationship between your religious beliefs and your rationality.
Well Mark, I have to apologise for feeding the troll above. I would have thought even Davison could have responded to a simple request for clarification of his position, but apparently not.
It’s also clear that he really doesn’t understand the internet. For those who’d like to read his “new” essay, it’s here: http://www.americanchronicle.com/articles/viewArticle.asp?articleID=18813
“The Darwinians have traditionally pretended that they had no critics.”
This is very funny. Evolutionary biology developed *from the criticism*. The major weaknesses in The Origin were that there was no mechanism of heritability clearly understood, and that “blending inheritance” was hard to square with innovation. Then Mendelian genetics were (re)discovered, and in the early 1900s some of the major objections (and predictions) of evolutionary biology were resolved in the “neodarwinian synthesis”. Since then, work has shown how initially neutral gene duplication allows genetic drift in some parts of the genome that can sometimes be readopted. The evidence for this is littered through the genes, and through morphology.
There has historically been vast criticism within and without evolutionary biology, and the reason it sometimes appears that biologists take no notice of criticism is that those critics have been answered repeatedly. That they choose not to address the answers and keep repeating the criticisms 50 or 100 or 150 years after the question is settled in the literature *without any new evidence to show how their point is new* is their problem, not ours.
Again, all Davison’s saying is “Darwinism doesn’t work, because I say so”, and quoting 50 year old science that was dubious even then. To pitch this comment a little nearer the maths focus of the hosting blog, yet again there’s a reference to random chance. Saying “Darwinism” is “random chance” is like saying winning a Backgammon tournament is blind luck. Yes there’s an element of chance in real life, both in day to day existence and in the genetic reassortment during meiosis, but on average over many trials the best combinations win out. An individual’s personal chance of cancer is impossible to predict, a population’s group probabilities easy (see discussion elsewhere on SB on this). Likewise, a single game of Backgammon is unpredictable (hence the heavy betting element in serious play) but over a long series of games, the more skillful player will win.
Missing the stochastic element of evolutionary theory by saying that “Darwinism is random chance” is a junior-school-level error, and one that is only promoted seriously by self-serving evolution-deniers and the undereducated or wilfully misinformed commentator or pundit.
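A quick toy simulation makes the backgammon point concrete. The 55% per-game edge is a made-up stand-in for “slightly more skillful”; nothing here models real biology, it just shows a small per-trial bias becoming near-certainty over many trials:

```python
import random

def match_win_prob(edge=0.55, games=501, trials=2000, seed=1):
    """Estimate how often the slightly better player (per-game win
    probability `edge`) takes the majority of a `games`-game series."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        won = sum(rng.random() < edge for _ in range(games))
        if won > games // 2:  # majority of the series
            wins += 1
    return wins / trials

# One game is nearly a coin flip; a long series is nearly a sure thing.
print(match_win_prob(games=1))    # roughly 0.55
print(match_win_prob(games=501))  # close to 1.0
```

Selection works the same way: noisy in any single trial, overwhelmingly directional in aggregate.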
This little episode gives life to the principle that science progresses by the gradual dying off of old professors… It’s evolutionary, really.
Expect Sal to scurry back to the safe confines of his friends’ blogs and narrate tall tales about how he had a ‘mathematician’ sweating bullets and how he is forever the loyal factotum of the Newton of Information Science!
Shiva, he appears to have moved on to derailing, by quotemining Motoo Kimura, a thread on Panda’s Thumb about how UD censors posts.
Charie B.
Thanks for the plug on the American Chronicle essay – “The Darwinian Delusion.” I forwarded that link to Dickie Dawkins. Do you suppose he will respond?
(MarkCC: vicious pointless attack on Dawkins deleted.)
“If you tell the truth you can be certain, sooner or later, to be found out.”
Oscar Wilde
I love it so!
“A past evoluton is undeniable, a present evolution undemonstrable.”
John A. Davison
(MarkCC: pointless sneering and insults deleted.)
It is hard to believe isn’t it?
I love it so!
“A past evolution is undeniable, a present evolution undemonstrable.”
John A. Davison
Kind of feels like being with a 9/11 conspiracy nut: “Oh, we’ve exposed them! We’ve exposed them, we’ve exposed them, we’ve exposed them!”
Of course, they have done precisely zero.
Oh, and John: what’s this obsession with us personally publishing stuff? This isn’t some namby-pamby new-age world where reality is altered by who makes an argument, so stop making a fool of yourself by pretending that ink on paper magically changes the truth of a statement or is required to discuss your logical fallacies.
If anyone’s been exposed it’s you: People like me noticed that you went along with that “randomness” fetish any middle schooler should know is a lie.
Doggerel used: #11, #51.
Try using arguments that are actually relevant for once. All you’ve been doing is the Chewbacca defense: Red herrings and non-sequiturs.
MarkCC,
My advice is to ban JAD. He does not appear to be a well person, much less rational, and giving him a forum such as this only enables his illness.
I know you don’t like the idea of banning anyone, but I think, for his own sake, he should go.
I think doctorgoo has a point. An open-door policy doesn’t compel Mark to play host to psychotics. JAD needs professional help.
You clowns don’t need to ban me. All you need to do is what you are doing which is to delete my comments. You talk about cowardice with a capital C.
(MarkCC Pointless offensive attack deleted.)
SOCKITTOME!
I love it so!
Let’s see how long this stands.
Ooooooh, how sweet it is!
Jackie Gleason
“You can’t make chicken salad out of chicken droppings.”
anonymous
“A past evolution is undeniable, a present evolution is undemonstrable.”
John A. Davison
John:
You want to declare victory because I’m deleting suicide jokes from your comments, you go right ahead. I will not tolerate that. Given my family history, suicide jokes are not something that I take lightly, and not something that I will tolerate on my blog.
The real coward is the person who refuses to engage in an actual discussion where he’d have to defend his ideas, but instead just shouts insults. Does that description ring any bells?
He’s gone the way of WoMI. Ignores what we say. Suicide “jokes.” What’s next? Gay jokes? “Girly Men?” Insulting our website traffic without bothering to look at it?
Takin’ all bets.
JAD:
Actually, you can. It just wouldn’t taste too good.
As for my comment on Jan 9 @ 9:02am on this blog entry, I take it back… your blogs are NOT funny. After reading through the comments (about 600 of the ~650 are yours), I find it to be quite disturbing.
John, you remind me too much of the Fafarman incident from Ed’s blog several months ago, where he apparently had some sort of mental breakdown and made a spectacle of himself.
I sincerely hope you don’t do the same (suicide jokes are NOT funny), and I wish you good mental health, John.
Bronze Dog wrote: “Kind of feels like being with a 9/11 conspiracy nut: ‘Oh, we’ve exposed them! We’ve exposed them, we’ve exposed them, we’ve exposed them!'”
I submit that any belief in the evidence for US government complicity in the 9/11 crime at some possible level is a matter of degree, and with that in mind I must point out that you haven’t clearly delineated where rational speculation ends and the realm of the “conspiracy nut” begins.
Davison doesn’t even warrant that designation. Imo, John is merely a typical neurotic acting out of his fear of death.
[Derail]
So far, all the 9/11 conspiracy people I’ve encountered fall well into the tin foil nuttery category. The least nutty ones don’t bat an eye when I point out the illions of man hours necessary to stealthily plant hushaboom bombs, combined with the high risk and liability of such a venture. Then you’ve got the far end of the nutty spectrum, like orbital R-9 wave cannons and so forth.
In short, the only 9/11 conspiracies I’m aware of worth considering for even a moment are attempts to cover up incompetence, which are probably small and mostly independent of each other.
[/Derail]
This isn’t the place (it’s certainly not the proper thread) to get into a detailed analysis of the huge amount of debate available re the 9/11 crime, nor am I an expert of any kind and I’d only be posting links to controversial websites, so it will have to suffice for me to say that in my opinion it’s not as cut and dried as you apparently believe it is.
Guys, the 9/11 conspiracy pales in comparison to the anti-string-theory conspiracy. You really think it’s an accident that men can publish polemical books whose “arguments” break down into repeating problems all physicists knew about already, misrepresenting or ignoring active areas of research, making grandiose and debatable claims about the way science should be done, and firing volleys of character assassination? Can you call all that coincidence? Accident? Misunderstanding? No, someone is at work here. Someone with an agenda. The stakes in this game were nothing less than understanding the fundamental natural laws of the Cosmos, and someone wanted to play the game beyond the public eye.
What would you do if you figured out string theory and discovered that the elegant construction of orbifolds and Dirichlet branes required to make it work led to new modalities of physical ability? We’re talking antigravity, wormholes, quantum singularities made on the lab bench. . . . What would you do if you had a chance at that kind of power?
I think someone figured it out. They’re at work right now, turning it into a technology which will change the course of human history, but they covet secrecy. How can you hide a scientific discovery which you probably made by accident and which any grad student might repeat? You have to deflect interest: convince the funding agencies that it’s all hot air, trick other scientists into believing that the whole theory is worthless, and most of all, lead string theorists down the wrong paths. Make them work harder in the directions you know are pointless in a vain attempt to defuse your critique!
Oh yes, it all sounds so easy. . . .
Well, since you haven’t gone into specifics I can’t reply in specifics, but some of us have seen most of those websites already, and suffice it to say it is as cut and dried as that, and you’ve been suckered by crank websites that use slick presentation (and appeal to your sense of fairness and wanting to give people the benefit of the doubt) to mask the problem that their factual basis is dishonest and/or misinformed.
I beg to differ; a little connecting of the dots in the existing peer-reviewed literature would actually support a degree of almost willful blindness to the problems.
The issue is not that hard to grasp, even at a superficial level. If it turns out a good portion of the 3.5 giga base pairs of DNA are either functional or evidence a high level of linguistic integration, it is at least a crude measure in terms of bits. A single DNA nucleotide position has four possible outcomes; thus, it can be roughly affixed a Shannon capacity of 4 bits. Now, how fast can natural selection fix in new functional nucleotides? I have pointed to Haldane’s Dilemma and the U-Paradox for starters and pointed to the severe evolutionary speed-limit issues. Under generous assumptions, the fixation rate is 1 novel/useful nucleotide per 300 generations. Too slow. On top of that, the error-correction capacities of natural selection have been shown to be theoretically suspect. Hard to evolve a functional information system when functional bits keep getting erased and never corrected. Thus we have a very large amount of information and an insufficient channel capacity from the environment to infuse that information.
The deterioration of the genome is empirically testable. I point you to the U-Paradox thread at UD and the comments I also made in response to the readers: Other problems for Human Evolution, Nachman’s U-Paradox. We all know, if there is not sufficient error correction, Shannon has shown the channel capacity is compromised. And at some point the channel is ineffective. The U-Paradox may demonstrate we have a net information flow of 0 bits of functional information. This is empirically testable. Are any of the evolutionary biologists boasting that the human race is gaining a net amount of novel functioning bits each generation?
The numbers I gave are crude, but sufficient to give one serious pause. Of course, one solution to the channel capacity problem is then to argue that what is stored is not designed or functional, but merely junk. If true, that would alleviate the channel capacity problems somewhat, but that position is increasingly indefensible. See: DNA researcher Andras Pellionisz gives favorable review to a shredding of Dawkins and TalkOrigins.
I hope I haven’t prejudiced you against Pellionisz’s work, because it’s some of the most impressive biology I’ve ever seen, and he seems not to have an axe to grind with ID proponents. In fact, Pellionisz seems to have some common interest with ID proponents in the research of JunkDNA….
You made the assertion about “channel capacity”, but without bothering to define the channel in a mathematical way! How is anyone supposed to refute that mathematically? It’s just another round of the same old nonsense: you don’t define your terms precisely, so then you can wave off any critique by saying that it’s using the wrong definition.
It’s the same
Well, it is unfortunate we appear to be on opposite sides of the ID debate, as I do recognize and salute your achievements in your field. I think you and Mark would actually find a field of study which even ID proponents find fascinating in the computational biology of Andras Pellionisz. Please visit: http://www.junkdna.com
His scientific research could use the support of people like you and Mark. However, I’ll be up front and say, there are elements in the evolutionary community that are institutionally biased against his findings.
I would hope, however, this fact will not deter you from publicly encouraging this form of “ID-friendly” research.
Typo! 2 bits: 4 elements of uncertainty, −log2(1/4) = 2 bits.
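For readers keeping score, Sal’s correction is standard information theory: with four equally likely bases, each nucleotide position carries −log2(1/4) = 2 bits, not 4. A quick sanity check (the 3.5-gigabase figure is taken from his comment above; the result is only a naive upper bound, since it ignores any correlation between positions):

```python
import math

# Entropy of one DNA position, assuming four equally likely bases (A, C, G, T)
bits_per_base = -math.log2(1 / 4)
print(bits_per_base)  # 2.0

# Naive upper bound for a 3.5-gigabase genome, ignoring all correlations
# and redundancy between positions
genome_bits = 3.5e9 * bits_per_base
print(genome_bits)  # 7000000000.0
```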
Mark pointed out to Sal,
“You made the assertion about “channel capacity”, but without bothering to define the channel in a mathematical way! How is anyone supposed to refute that mathematically? It’s just another round of the same old nonsense: you don’t define your terms precisely, so then you can wave off any critique by saying that it’s using the wrong definition.”
Sal’s response?
“It’s the same”
I am glad that you admit that we are dealing with the same old nonsense, but I am puzzled that you would choose to so publicly admit that you have no case. Why is this? You rarely display this level of honesty.
Sal, I also note that you have STILL failed to define the channel. What is your difficulty?
This appears to be similar to your posts on the PT, where, when asked for your definition of ‘natural selection’, you failed completely to even address the question.
You continue to embarrass yourself and the ID movement with this failure to actually come to grips with the vacuity of your arguments.
Please try again; and this time actually say something meaningful.
Coin:
I am an “expert of any kind” (physics, to be precise). And in my area of expertise the “mainstream” analysis is quite good, and the 9/11 “truthers”, when they try to do physics at all, bungle things badly. I even re-did one of the minor calculations whose conclusion some of the truthers were disputing. (There was enough gravitational potential energy in the towers to pulverize the concrete in them.) In fact, quite a number of the claims are so stupid that a layman’s expertise is adequate to see where they go wrong. That doesn’t inspire confidence in the rest of their claims.
Now there is, of course, a large grey area between the halfway reasonable conspiracy theories and the just plain stupid ones. But in my experience on the English-speaking internet, that middle ground is uninhabited. (Strictly speaking, the official story is also a conspiracy theory. There’s nothing per se wrong with conspiracy theories; conspiracies do occur and are sometimes successful. But the phrase is used as a shorthand for “the usual tinfoil hat silliness” rather than for its literal meaning.)
RE: U-Paradox
Hmm. For starters, I’d assume that one problem with your analysis is that it *seems* you’re saying that the only type of mutation is a point mutation. That is, if I have 3 deleterious mutations added to my child’s genome, I’ll need to add enough additional children to the mix that these mutations are wiped out in a later generation by other point mutations.
There are, of course, many more types of mutations than just that, which transplant entire sections of the genome at a time. That can make it much easier to wipe out an entire swath of errors at one time.
As well, there are many, many miscarriages within an average woman’s life – fetuses automatically aborted within the very first stages of pregnancy when it turns out that the developing clump of cells isn’t viable due to a bad mutation.
Looking over the paper, I also note that it explicitly talks about the higher-than-normal mutation rate on the Y chromosome. I haven’t read this in any detail (nor am I more than an amateur biologist), but I do recall that there are a number of factors that aid in purging mutations from the Y chromosome, though I cannot bring them to mind at this moment. I believe the discussion on this topic was in a Pharyngula post, but I don’t have time to search for it at the moment.
Others may back me up or correct me on points I have discussed, as appropriate.
Care to reconcile this with genetic algorithms? They employ a number of types of selection, which can be quite noisy indeed (sometimes the noise is welcomed as fuel!) That brings me to another point – that a gene that was ‘noise’ at one point can easily be co-opted into actually doing something useful. There should be many examples of this in the literature; it is also experimentally verifiable in a short period of time with genetic algorithms. Often, the starting ‘organisms’ are randomly generated – their entire ‘genome’ is noise, and useless for the task they are set. Over generations, though, that noise is put to use, used as fuel for selection to operate on, and eventually very useful things can come out of it. I’ve done this myself with a program I’m evolving to play 3D Tic-Tac-Toe, modelled after the checkers-playing program ‘Anaconda’, an excellent (and fun!) example of the strength of GAs on complex problems.
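Xanthir’s point about noise becoming fuel is easy to demonstrate. Below is a minimal, purely illustrative GA sketch (the bitstring genome, the count-the-ones fitness function, and every parameter are my own toy choices, not anything from Anaconda or any published work): the initial population is pure random noise, yet selection, crossover, and mutation reliably turn it into high-fitness genomes.

```python
import random

random.seed(1)

GENOME_LEN, POP_SIZE, MUT_RATE = 32, 50, 0.02

def fitness(genome):
    # Toy objective: number of 1-bits. Stands in for any scored behavior.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with probability MUT_RATE
    return [b ^ 1 if random.random() < MUT_RATE else b for b in genome]

def crossover(a, b):
    # Single-point crossover
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

# Start from pure noise: random genomes with no designed structure at all
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP_SIZE // 2]   # truncation selection
    children = [parents[0]]          # elitism: keep the current best
    while len(children) < POP_SIZE:
        mom, dad = random.sample(parents, 2)
        children.append(mutate(crossover(mom, dad)))
    pop = children

best = max(pop, key=fitness)
print(fitness(best))  # a random start averages ~16; selection pushes toward 32
```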
Salvador,
And I have responded. In short, sexual reproduction removes the speed limit you appear to be thinking of. For details, see my previous post:
http://scienceblogs.com/dispatches/2007/01/answering_cordova_on_ids_goals.php#comment-306155
Or better yet, just google Haldane’s dilemma. I’m sure others have done a better job analyzing it than I have, as I am not an evolutionary biologist.
Speaking of which, Torbjörn Larsson also responded and referred you and I over to a relevant talkorigins page on that thread:
http://scienceblogs.com/dispatches/2007/01/answering_cordova_on_ids_goals.php#comment-306265
Xanthir, FCD
I don’t know if it’s strictly speaking a mutation, but recombination can do this. Except for the Y chromosome.
Genetic algorithms tend also to be small; if they were larger, too high a mutation rate could be a problem. (I think it is mutations that Sal is going on about, rather than a noisy fitness function.) However, IMO, Sal overestimates the “channel capacity” required for correction. I don’t think he’s accounting for the effects of sexual reproduction correctly (or at all). But until he actually presents his argument, it’s hard to say.
Wow, JAD, you really don’t bother to read much at all of others’ posts, do you?
Fine. I won’t bother reading yours any more, either.
Sal:
(1) That is not a Shannon definition of channel capacity. That’s a bunch of random handwaving. Do you even know how Shannon defines channel capacity, or how it’s computed?
(2) You do a great job of demonstrating exactly why Shannon is not appropriate for this purpose. Shannon theory is based on the idea of a fixed message travelling over a fixed channel. Your explanation of the channel essentially sees the genome of a species as a single message being transferred down a channel. But that’s not how it works. The genome is millions of copies, all of them changing slowly, and being dynamically mixed and recombined. If you were to take your babble, and actually turn it into a mathematical definition of a channel, you still wouldn’t have an accurate model – because you’re basing your assumptions on a single message, ignoring copying and recombination, ignoring the fact that there’s numerous individuals and variation between the individuals, ignoring the fact that changes are occurring simultaneously.
(3) You’ll pardon me if I refuse to take a discussion at UD seriously, given that UD has a long history of banning anyone who so much as questions the party line. What kind of real scientific discussion can you have when anyone who disagrees is immediately banned from the forum?
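For context on Mark’s point (1): a Shannon capacity calculation starts from a precisely specified channel. The textbook example is the binary symmetric channel, whose capacity is C = 1 − H(p) bits per use, where p is the bit-flip probability and H is the binary entropy function. The sketch below is only that textbook case; it is not a model of DNA or of evolution, just an illustration of the kind of definition that would have to be supplied before any capacity claim could be checked.

```python
import math

def binary_entropy(p):
    # H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    # Capacity of a binary symmetric channel with crossover probability p
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))   # 1.0 -- a noiseless bit carries one full bit
print(bsc_capacity(0.5))   # 0.0 -- pure noise carries nothing
print(round(bsc_capacity(0.11), 2))  # about half a bit per use
```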
John:
You want to go off and brag about how you’ve been banned from another site, you go right ahead and do it. I am not going to tolerate this bullshit any more. I’ve tried politely asking you questions; politely warning you about attacking people; not-so-politely warning you; editing the insults out of your comments – and all it’s accomplished is to feed your ego. So fine – congratulations, you’re the first person I’ve banned. Go away, and brag someplace else about how I banned you.
Sal:
Neither Haldane’s dilemma nor Nachman’s U-paradox nor error correction are limiting evolution in real populations. Discussing junk DNA is irrelevant – that is non-functional DNA. I will go on and discuss some of your points below; others (the U-paradox) are already covered in other comments.
And you still need to come back with specific “numbers or calculations for your argument”.
It is interesting to see that ID opens up for ‘Bible code’ numerologists. I note that in your discussion with Pellionisz you mention the ‘hidden messages’ that those nuts imagine.
“Many in the ID community are intensely interested in uncovering hidden language and codes in DNA and other structures within the cell.” ( http://www.uncommondescent.com/archives/1941#comment-85041 )
Haldane’s/ReMine’s dilemma is not a limit for evolutionary changes in real populations. Haldane’s model doesn’t cover all effects, such as limits for populations or non-independent fitness effects. It turns out that there is a long list of phenomena that increase change in real populations: soft selection, intraspecific competition, bottlenecks, non-independent fitness effects, and possibly neutral drift and introgression.
For a recent discussion with lucid analyzes of some of these mechanisms, see http://www.pandasthumb.org/archives/2007/01/dissent_out_of.html .
In short, population models predict that much faster change rates are possible in normal or small populations. And some of those effects have been observed as predicted.
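One piece of this is easy to check numerically. In the simplest deterministic haploid model (a toy of my own, not anything from the linked thread), an allele with selective advantage s has its odds multiplied by (1 + s) each generation, so the time to near-fixation scales roughly as 1/s; the mechanisms listed above only speed things up relative to Haldane’s single-substitution accounting.

```python
def generations_to_fix(s, p0=1e-4, p_end=0.99):
    # Deterministic haploid selection: p' = p(1+s) / (1 + p*s).
    # Counts generations for the allele frequency to climb from p0 to p_end.
    p, gens = p0, 0
    while p < p_end:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

for s in (0.01, 0.1, 0.5):
    print(s, generations_to_fix(s))  # stronger selection fixes much faster
```

Note this model ignores drift, finite population size, and cost of selection entirely; it only shows how sensitive fixation time is to the selection coefficient.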
This is an argumentum ad ignorantiam, instantiated as an ID ‘law of information conservation’. But in reality there is no observation that genes aren’t developed or made non-functional again (pseudogenes), nor that genomes are accumulating defects over a long time.
In fact, it has been noted that some population models look like Bayesian inference used in machine learning. Alleles act as ‘hypotheses’ and the population as ‘theory’. The original evolutionary prediction that species have a certain phenotypic characterization that changes over time is supported by the newer prediction that species have a certain genotypic characterization that changes over time.
I quote myself from PT:
“I wondered how long it would take for Pellionisz and ID to meet up, since science blogs have been negative against his PR comments. The negativity being that he misrepresents junk DNA and ‘postgene’ diseases, and has not presented any results from tests of his theory. (Reminds one of something, doesn’t it? 😉
He makes such mistakes as including regulatory regions in junk DNA. He also presents any (ie all) diseases whose progress is affected by variation in DNA as a ‘postgene’ disease.
For example, he mentions that his co-author has multiple myeloma and calls it ‘postgene’. In fact, 50 % of cases are observed to have an immunoglobulin chromosomal abnormality. Where is the junk here? Not in the DNA, but in Pellionisz classification.”
It is not surprising that woo businesses combine. Two woo or not to woo? Woo-woo!
Andrew:
Thank you for the reference. However, it turns out that my original objection to ReMine was ignorant, since Haldane’s model includes setting a limit on fixation even for drifted genes that become selective.
The link I gave Sal above has knowledgeable commenters explaining some of the other mechanisms that a real population faces, and why Haldane’s model, especially as used in the “dilemma”, is inadequate. They don’t mention sexual reproduction explicitly, but it is included in some of those mechanisms.
Classic troll behavior: Be obnoxious while actively evading the real discussion (including claiming that computer-generated letters are somehow not an arguing tool while inked letters are), and then when you get banned for being obnoxious and obfuscatory, claim persecution because the eeeee-ville skeptics couldn’t deal with the arguments never presented and/or repeatedly crushed.
After watching John’s antics on the internets for a year or two, I think it’s clear that as a failed scholar, the identity John has now taken on for himself in his golden years is as a pseudo-martyr. He repeatedly hops from website to website, pro- or anti-evolution, and deliberately works to get himself banned from every one. If you don’t ban him right away, he’ll either abuse you until you do, or go around repeatedly claiming you have banned him even if you haven’t (ask Alan Fox). Needless to say, this becomes a self-fulfilling prophecy. He DEMANDS that people ban him.
Unfortunately, John’s career and prestige have tanked so badly that the only identity he can now cobble together is ‘the scholar who is so radical, no one can handle how brilliant he is, so everyone bans him’. Needless to say, this gets him off the hook for ever having to publish anything, produce anything, successfully defend his ideas, or meet any of the benchmarks of a competent, accomplished scholar, so it’s way easier, as well.
It must help that he seems to have quite a masochistic streak along with his Tourette’s Syndrome, anyway.
I’d just like to point out a recent article dealing with the usefulness of evolutionary algorithms to solve problems not related to biology. The latest comes from the field of crystallography.
http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JCPSA6000124000024244704000001
Another; there are many examples:
http://www.rsc.org/Publishing/Journals/CC/article.asp?doi=b609138e
And I hope you don’t mind my quoting the first bit of the paper:
Evolutionary algorithms are being increasingly used to solve a variety of global optimization problems in chemistry, nanoscience and bioinformatics. These powerful techniques are inspired by natural evolutionary processes, and mimic the principles of biological evolution and survival of the fittest to explore parameter space. However, the biological evolution of natural systems can be a slow process, especially compared to the rate of cultural evolution in a society when adapting to changing social environment. Cultural algorithms have been developed to model behaviour based on the principles of human social evolution, and can be used to bias the search process by passing experience and knowledge of behavioural traits of a population from one generation to the next. In simple terms, this cultural information can be used to reduce the search space of a standard biological evolutionary algorithm, improving both performance and efficiency of the global optimization process. In this paper, we report modification of the Differential Evolution (DE) global optimization algorithm, by incorporation of the concept of Cultural Evolution, with the aim of increasing the efficiency of DE when applied to crystal structure solution from powder diffraction data.
Although DE is a relatively new evolutionary algorithm, it has proved highly effective in a range of chemical contexts, including X-ray scattering, crystal growth epitaxy, optimization of clusters, protein crystallography, molecular docking, disordered crystal structures and the direct-space crystal structure solution of organic molecules from powder diffraction data. The direct-space approach to structure solution involves generation of trial crystal structures in real space, by placing a structural model of the molecule inside the unit cell, independent of the diffraction data. A calculated powder diffraction profile is then compared to the experimental pattern to assess the fitness of each structure. Global optimization techniques, such as Monte Carlo, simulated annealing or evolutionary algorithms11, are used to find the minimum point on the fitness landscape (or hypersurface), corresponding to the correct crystal structure.
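For the curious, the DE algorithm the paper builds on is compact enough to sketch in a few lines. The version below is the standard DE/rand/1/bin scheme applied to a toy quadratic objective (my own stand-in; the paper’s actual fitness is the agreement between calculated and experimental powder diffraction profiles, and its cultural-algorithm extension is not shown here):

```python
import random

random.seed(0)

def sphere(x):
    # Toy objective standing in for the diffraction-profile fitness
    return sum(v * v for v in x)

def differential_evolution(f, dim=5, pop_size=20, F=0.8, CR=0.9, gens=200):
    # Classic DE/rand/1/bin: mutate with a scaled difference vector,
    # binomial crossover, then greedy one-to-one selection.
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    scores = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # ensure at least one mutated gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (random.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            t_score = f(trial)
            if t_score <= scores[i]:
                pop[i], scores[i] = trial, t_score
    return min(scores)

best = differential_evolution(sphere)
print(best)  # should land very close to the global minimum of 0
```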
The Meccano of life
In Martyn Amos’s Genesis Machines, Steven Poole discovers how to turn some DNA into 50 billion smiley faces
Saturday January 6, 2007
The Guardian
Genesis Machines: The New Science of Biocomputing
by Martyn Amos
353pp, Atlantic, £18.99
“… The book also shows how hard it is to get away from the constant application of engineering metaphors, such as Amos’s claim that genes are ‘computing components’, and anthropomorphic language, as when it is said that ribosome works by ‘interpreting mRNA messages’. This is manna to creationists, who insist that where there is a computer, there must be someone who designed it. Amos skips lightly over such philosophical problems, but it is a serious question whether appeals to ‘self-organisation’ or ‘information’ are themselves in some sense metaphysical, even if one rejects the epistemological nihilism of ‘Intelligent Design’. Understandably, however, Amos is more of the pragmatic scientist’s persuasion: sure, there’s a mystery here – so let’s tinker around and try to solve it. His lucid and punchy prose conveys a genuine excitement of the frontier. It is even possible that, when the footnote numbering goes crazy on pages 199-201, it is some sort of joke about genetic mutation. Sadly, I was not able to find meaning in the resulting number series….”
I note, merely for the sake of completeness, that I was right.
Sal made some stupid, ill-informed remarks.
He was called on them.
He responded with more nonsensical remarks.
When called on those, he ran away from the thread.
Utterly typical behavior on the part of Sal.
Jonathan Vos Post wrote, “This is manna to creationists, who insist that where there is a computer, there must be someone who designed it.”
Correct. Seems to me that the ID logic – the little that can actually be found – is fundamentally based on the notion that something can’t come from nothing, and secondarily on the notion that life is too complex to have arisen as quickly as it has without an intelligent guidance. Both notions are filled with unprovable definitional assumptions and pitfalls. The former would be sound only if our universe did indeed have a beginning and was the only one (both of which assumptions can’t be proven), and the latter would be sound only if we knew the former’s assumptions were conclusively true and if we knew for certain not only all the mechanisms of the cell but also if we knew that theories such as panspermia aren’t valid … and we DON’T know such things.
As I’ve already pointed out – and no one has really responded to it – any possible “intelligence” creating or guiding the universe or life may not be anthropomorphic at all but may instead be due to a natural aspect or natural aspects of unknown variables of existence (multiple universes, infinite size or time, etc.).
You presume I don’t or never have, eh?
You’re apparently unwilling to see the significant parallels between nucleotide fixation per generation and fixation of bits of information per generation. There is a significant channel capacity problem here. How many bits of information can be infused (fixed) into a population is analogous to how many DNA molecules can be fixed into a population via natural selection. Apparently there is some willful closing of one’s eyes.
Added to that problem is the fact that information is generated by noise (random mutation). This constricts the channel capacity of the Darwinian mechanism even more severely, if not effectively destroying it in real-world scenarios. Many human-made GAs do not model the destructive effects of mutation on already “selected” solutions, but as we see, in real biology, existing forms are constantly subjected to mutation. I pointed out real life has a problem with purifying selection. These mutations are sanitized away in most human-made GA simulations.
Anyway, I’ll have to get around to responding to your inappropriate analogies of human-made genetic algorithms to real biology….
But first, I point out your equivocations in the sense of the word “necessary” for Turing-equivalent machines of life.
One can say, metabolism is a necessary condition of life, metabolism is not a sufficient condition for life. A Turing-equivalent computing system is a necessary condition for life as we know it, it can in no wise be a sufficient condition. You have it totally backward. If you take your backward claim to its logical consequence, you’ll be claiming a Dell computer is a life form!
You’ve equivocated the sense of my usage of the word “necessary”.
Salvador
For the formalists out there, there is the channel capacity as limited by the signal-to-noise ratio (from Shannon’s classic paper), but then there is also the channel capacity as limited by the mechanical properties of the devices involved. For example, I can use a laser beam over optical fiber like a simple telegraph, where I telegraph a message to a receiver in Morse code. The channel capacity in the mechanical sense is about 2 bits a second (about 1 alphabetic letter every two seconds), from the mechanical properties of the transmitter, not necessarily because the SNR is poor. The SNR would permit substantially more bits per second if the mechanical transmitter were capable.
In the evolutionary sense, there is a mechanical limitation defined by the population resources and basic population genetics. A more charitable reading by Mark would have made that connection.
From what we know of DNA computing, the channel capacity of DNA string from an SNR standpoint is respectable, but that was not the sense of channel capacity that I was discussing….
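The distinction Sal draws here does exist in communications engineering: the Shannon–Hartley theorem gives the SNR-limited capacity C = B·log2(1 + SNR), and a hand-keyed transmitter can sit far below it. A quick illustration with made-up numbers (1 kHz of bandwidth at 20 dB SNR, i.e. a linear SNR of 100) shows that in his laser-telegraph example the mechanical limit, not the noise, is what binds. Of course, naming this distinction still does not define the evolutionary channel.

```python
import math

def shannon_hartley(bandwidth_hz, snr_linear):
    # Shannon-Hartley: SNR-limited capacity of an analog channel, in bits/s
    return bandwidth_hz * math.log2(1 + snr_linear)

# Made-up example: 1 kHz of bandwidth at 20 dB SNR (linear SNR = 100)
c = shannon_hartley(1_000, 100)
print(round(c))  # roughly 6658 bits/s -- vastly above a ~2 bit/s hand key
```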
I don’t really understand what you guys are discussing; could anyone please tell a layman like me, in simple language, how this is pertinent to the real essence of the ID question? Sum it up in a few sentences?
And please forgive my own impertinence.
Sal:
Up to the same tricks as usual – obfuscation and bullshit to avoid actually answering a question.
By golly, you actually got something right!
No, I don’t believe that you actually know or understand how Shannon defines channel capacity, or how it’s computed. And I think that your response is largely an attempt to throw lots of words around without actually saying anything, in order to cover up the fact that you’re making a claim that you cannot support.
You’re claiming that the mathematics of Shannon’s information theory can demonstrate that the capacity of DNA as an information carrier for the evolutionary process is inadequate.
Fine. You want to make that claim, I have no problem with it. But to actually make that claim in terms of Shannon’s theory, which is what you claim to be doing, you need to show three things.
(1) A valid mathematical definition of the information channel of the evolutionary process.
(2) A valid Shannon computation of the capacity of that channel.
(3) A counterexample showing that the information that would need to be transmitted down the channel is larger than the capacity computed in (2).
You’re not providing that. You’re just repeating the same words, which amount to repeating the assertion that you can define the information channel of an evolutionary process, and that it doesn’t have sufficient capacity.
Fine. Show me.
I believe that you cannot even do the first step. I don’t think that you can do a meaningful definition of the evolutionary information channel in terms of Shannon theory.
Go ahead, and prove me wrong! I’d actually love to be proved wrong about this – I would love to see a definition of evolutionary information transfer in terms of Shannon theory – it would be fascinating!
Yes, Sal is a real wash-out as a debater on this topic. He has become the new Warren Bergson – perpetually making assertions that fail to be backed by any hard numbers.
This is science, Sal. And mathematics. If you can’t show your work, then you get no credit; and you are COMPLETELY unable to actually demonstrate any of the math behind your assertions.
I suspect you learned this from the master obfuscator himself: Bill Dembski, who has yet – after all these years – to provide a single actual calculation of CSI. Going on a decade now, isn’t it? And no actual math to back up his assertions.
And of course THIS nonsense: “One can say, metabolism is a necessary condition of life, metabolism is not a sufficient condition for life. A Turing-equivalent computing system is a necessary condition for life as we know it, it can in no wise be a sufficient condition.”
is utterly false. Since you’ve failed to define ‘life’, you’re just playing games with another set of equivocations.
Note that YOU are the one spending his time equivocating; you are the one ducking and dodging answers; you are the one who is not even capable of defining natural selection (your attempt at a discussion of this point with Febble was most instructive.)
Once upon a time you could at least make an effort at holding a rational discussion. What happened?
I also would love to see a definition of evolutionary information transfer in terms of Shannon theory.
I’ve already admitted that my PhD research does not easily extend to this.
Here are a few ideas that might hint at a plausible definition.
(1) Consider the set of different kinds of mutation available: point mutation, inversion, crossover, frame shift, chromosomal duplication, and so forth. It has been pointed out in this thread that mutations other than point mutations are important to this calculation.
(2) Consider the statistics of how often each type of mutation occurs (complication — this differs from one clade to another, and between organisms; it also differs greatly between different genes within some organisms, i.e. for hypervariable genes).
(3) The basic idea struggling to break free from Sal’s verbiage is that we are treating a channel that leads in one generation from the genome of all the organisms in that generation to the genome of all the organisms a generation later, which is awkward given the difference in lifespan, reproductive ages, and reproduction rate from one organism to another.
(4) The Shannon approach is to consider the statistical ensemble of possible organisms a generation later, and to calculate a mutual information between source (generation x) and receiver (generation x+1). Of the possible organisms, given the probability of each within the total ensemble, how much information is needed to describe which ones from the ensemble of possibles actually were conceived (or born, which introduces the problem of dealing with embryos of various stages).
(5) One must refine the definition of “possible” organisms in next generation. Not any organism is possible. Only those that can be reached in one generation of the set of available mutations from the previous generation. Point mutations, if they don’t create or destroy regulatory genes, can change one codon to another, which sometimes changes one amino acid to another at that place in the expressed protein. Stuart Kauffman’s writing and mathematical models deal in depth with what he calls “The Adjacent Possible” — which is, loosely speaking, the set of proteins “next to” a starting set of proteins, namely close to it according to a metric. It is not just a Hamming distance metric, but more related to an edit distance. It takes some effort to go from a definition of adjacent possible gene to adjacent possible phenotype.
(6) “Adjacent” is different when looking at mutations other than point mutations. As I pointed out in my doctoral work, enhancing partial descriptions by John Holland, a crossover is the intersection of two hyperplanes in genetic space, one for the initial string from one parent, the other for the terminal string of the other, since we are looking at the concatenation of the two. For inversion, there is a substring which is reversed in direction. That amounts to a reflection in a hyperplane corresponding to the inverted part, but with fixed points on the non-inverted genes.
(7) Again, there are probabilities at every step of the way. Edit distance is more like a Feynman integral: the number of mutations times the probability of those mutations, summed over all possible combinations, properly weighted.
(8) “Noise” in the channel is itself hard to define. This is not the same as the mutations themselves.
(9) The set of organisms that exist and are possible is usually described in terms of the fitness landscape. It’s important to realize that the genetic algorithm in one generation starts with a population of points on the fitness landscape, kills off some with probability inversely proportional to their fitness, selects others with probability proportional to their fitnesses, pairwise does crossover on each pair, throws in the statistically proper inversions and point mutations, and puts each child in the place where a previous organism was killed off.
(10) So we need to constrain the ensemble of fitness landscapes on which the process occurs, and see how the boundary between real and possible itself changes with each generation.
The above is an off-the-top-of-my-head crude handwave towards an approach to model evolutionary channel capacity. It looks like a PhD dissertation worth of work to get right, and several technical papers worth to even fill in major details to the sketch.
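Step (4) of this sketch, a mutual information between generation x and generation x+1, is at least computable once you fix a finite alphabet. Here is a minimal discrete version; the two-allele parent/offspring data is invented purely for illustration. With faithful copying 90% of the time, one locus transmits about 0.53 bits per generation rather than the full 1 bit:

```python
import math
from collections import Counter

def mutual_information(pairs):
    # I(X;Y) = sum over (x,y) of p(x,y) * log2( p(x,y) / (p(x)*p(y)) ),
    # estimated from the empirical distribution of observed pairs
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Invented parent/offspring allele pairs at one locus: faithful copying
# 90% of the time, mutation to the other allele 10% of the time
pairs = ([("A", "A")] * 90 + [("A", "G")] * 10 +
         [("G", "G")] * 90 + [("G", "A")] * 10)
mi = mutual_information(pairs)
print(round(mi, 3))  # 0.531 -- of the 1 bit sent, noise eats about half
```

Scaling this up to whole genomes and whole populations is exactly where the hard, unfinished work described above begins.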
Over on Pharyngula is an explanation of the definition of gene: “A gene is an operational region of the chromosomal DNA, part of which can be transcribed into a functional RNA at the correct time and place during development. Thus, the gene is composed of the transcribed region and adjacent regulatory regions.”
Is this a useful hint of what I think might need to be done, to get from genes in one generation to those of the next, and the much more complicated problem of what that means in terms of transitions of phenotype?
I’m prepared to take potshots on this from all sides, Mark, Sal, and readers of this great blog. I have learned that only through mutation is there variation, so only by mutating this sketch of a theory can there be an ensemble of next-generation theories, and the scientific method is supposed to describe how the theory interacts with the environment of theorists and experimentalists and “nature” to evolve to a higher fitness theory. Hence I must and do take criticism seriously.
This is a busy few days for me, with my already having been subpoenaed as an eyewitness in a criminal trial whose jury selection starts tomorrow. Plus professorship job searching, the writings I’m doing to various deadlines, the usual logistics of a 3-person, 1-dog household, plus 2 years of overdue tax returns, and so much more. Hence I may not respond to all questions immediately, but will try to be as responsive as time permits.
Dr. Vos Post – I’m curious about how we get to ‘channel’ in this context. Shannon appears to be positing ‘receivers’ independent of channel, yes? But in the biological case, this isn’t really happening. Is it valid to extend the definitions into this arena?
I’m not an information theory person, and my grasp of the higher maths is questionable (brains are more my thing), so I apologize if this comes off as too simplistic.
I should just lurk now, but Scarlet Seraph has asked a very good question. I spoke at length, face-to-face, over the years with Shannon. He acknowledged to me this defect in his theory.
The receiver (in Shannon’s theory) does NOT structurally change based on the message from the transmitter, as it comes through the channel. In real life, receivers CAN be changed. Ever heard someone say: “This book changed my life?” Or consider the effect on the audience of the speech (no transcript exists) by the pope, whose audience launched the First Crusade?
Shannon admitted that he knew this, but left it out, in order that the model be mathematically tractable.
In the evolutionary channel situation, the receiver is changed from generation to generation. The fitness of generation x+1 depends in part on the interactions between its organisms and those still around from generation x.
That there are nonlinear effects of populations of organisms on each other goes way back. See, for instance, Lotka-Volterra equations, also known as the predator-prey equations.
The more general interaction of n species of organisms on each other, through an nxn matrix of coefficients of n differential equations in n variables is the basis of fascinating work that applies to competition between n corporations in a market equally well, but I won’t go off on that tangent.
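The predator-prey equations mentioned above are easy to integrate numerically. A minimal sketch (the parameter values are arbitrary illustrations of mine, not anything from the comment):

```python
def lotka_volterra(prey0, pred0, alpha, beta, delta, gamma, dt, steps):
    """Forward-Euler integration of the classic predator-prey system:
        dx/dt = alpha*x - beta*x*y    (prey)
        dy/dt = delta*x*y - gamma*y   (predator)
    """
    x, y = prey0, pred0
    history = [(x, y)]
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        history.append((x, y))
    return history

# arbitrary illustrative parameters; the equilibrium sits at (gamma/delta, alpha/beta)
traj = lotka_volterra(10.0, 5.0, 1.1, 0.4, 0.1, 0.4, 0.001, 50_000)
prey = [x for x, _ in traj]
print(min(prey), max(prey))  # the two populations cycle instead of settling down
```

The nonlinear coupling terms (beta\*x\*y and delta\*x\*y) are exactly the "interaction of species on each other" being described.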
Is my sketch of a theory “valid” to the biological case, as asked? Not yet. Much needs to be filled in and maybe corrected. But the general approach is not, on the face of it, invalid. John Holland’s 1975 “Adaptation in Natural and Artificial Systems” book launched a thousand Genetic Algorithm ships. In my opinion it was the first that had enough in its biological model to be applicable to biology, and good enough math and computer science to be a breakthrough there.
There is the field of “mathematical biology” — which includes information theory, so there are certainly many people who assume enough validity to practice their craft.
Now I need to go back to lurking until enough feedback ensues and enough time in my overcommitted schedule.
Sal, I’m having a hard time pinning down your meaning, so I’ll try to lay things out carefully:
– I’ll assume that “life” and “life form” are definable in computing theoretic terms, even though you haven’t defined them.
– You said that the claim “A Dell computer is a life form” logically follows from Mark’s rendition of your previous claim, which is “Turing equivalent computational capability is sufficient for life”. I’ll assume that you classify the Dell computer as a TM, and specifically model it as a UTM, in which case it would have made more sense to say “A Dell computer can host a life form.” I’ll assume that you intended the latter.
– Sufficiency of automata types is a well-established concept. When we say that a TM can recognize a palindrome, we’re not saying that any old TM can do it, nor are we saying that a UTM without the appropriate program can do it. Rather, we’re saying that the Turing computing class is sufficient, as opposed to, say, pushdown automata.
– To the degree that a Dell instantiates a UTM, it can compute anything that’s computable, given the proper programming.
– If you don’t believe that a Dell can host a “life form”, then you apparently don’t believe that “life” is computable. Have I represented your position accurately? If so, then your claim is a strong one, and I invite you to support it.
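The sufficiency-of-machine-classes point above can be made concrete: recognizing marked palindromes needs no more than a single stack, i.e. a pushdown automaton suffices, while a plain finite automaton does not. A small sketch (the function name is mine):

```python
def accepts_marked_palindrome(s):
    """Recognize the language { w + '#' + reverse(w) } using nothing
    but a single stack -- a deterministic pushdown automaton is
    'sufficient' for this job; full Turing power is not needed."""
    stack = []
    it = iter(s)
    for ch in it:                 # phase 1: push symbols until the marker
        if ch == '#':
            break
        stack.append(ch)
    else:
        return False              # no marker at all: reject
    for ch in it:                 # phase 2: match the tail against pops
        if not stack or stack.pop() != ch:
            return False
    return not stack              # accept only if everything matched

print(accepts_marked_palindrome("abc#cba"))   # True
print(accepts_marked_palindrome("abc#abc"))   # False
```

Saying "a Turing machine can recognize this" is a statement about the class being at least this powerful, which is exactly the sufficiency sense at issue.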
Sal:
I was so caught up in the stream of babble that I didn’t even notice that Dell computer quote until secondclass commented on it.
There’s no polite way to say this. That’s a damned stupid argument, and I’m pretty sure you know it. You’re trying to use a strawman to weasel out of having to actually make your argument. Anyone who’s studied logic understands the concepts of sufficient and necessary – and you’re trying to play games to avoid actually confronting the fact that you have no argument.
You’re arguing that a Turing-equivalent computing device is necessary as a part of a living cell. What that means is that no living cell can possibly exist without some form of Turing-equivalent computational capability – that is, there is some key computational function that is part of the biochemical process of life that requires Turing-equivalent computing capability.
When I say that Turing-equivalence is sufficient but may not be necessary, anyone with two brain cells to rub together knows perfectly well what I mean: that the computational function that is needed by life can be satisfied by a Turing-equivalent system – a Turing-equivalent system is sufficient for that purpose; but that the full capability of a Turing machine may not be necessary: it may well be the case that a less powerful computational device would also be able to do the job.
Are you going to seriously maintain that you thought that my argument was that a computational capability was the only requirement for life? How could that possibly have made sense as a response to your argument?
I think I get it. “Life form”, “Turing machine equivalent”, “necessary and/or sufficient conditions for life” … this is indeed about semantics, about precise definitions of terms, just as I’ve been expressing.
Reminds me of the use of the term “life” being conflated to include “full human life, consciousness, and legal rights” as used by the anti-choice anti-abortion folks (whose groups obviously overlap with the religious IDists).
As I wrote, “My point is that we can diffuse the IDists by pointing out their semantic sleight-of-hand: so the universe is a design (a pattern of energy), so what? We can’t derive an anthropomorphic God scientifically from that, can we?”
Even though I don’t have the experience to fully grasp the above posts from various scientists, I can see that those posts are doing just what I advocated, but in a very educated and detailed manner.
If I’m just in the way, here, gentlemen, merely let me know and I’ll shut up. lol
Sal:
It seems you misunderstand fixation. Fixation is the state in which an allele has become present in all members of the population. Even under the neutral conditions of genetic drift, a lost mutation is replenished by recurring (since it has some probability of arising again), and eventually becomes fixed. It is only a matter of time.
Nitpicking that optical fiber systems are SNR-limited on length aside, experiments with DNA computing have no bearing on gene expression and evolution. You must present numbers.
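The fixation description above is easy to demonstrate with a toy Wright-Fisher simulation: with no selection at all, every replicate still ends with the allele either fixed or lost. (The population size, starting count, and seeds below are illustrative choices of mine.)

```python
import random

def wright_fisher(pop_size, init_count, seed):
    """Neutral Wright-Fisher drift: each generation, the new allele
    count is a binomial sample at the current frequency. Runs until
    the allele is fixed (count == pop_size) or lost (count == 0)."""
    rng = random.Random(seed)
    count, gen = init_count, 0
    while 0 < count < pop_size:
        p = count / pop_size
        # binomial draw via pop_size Bernoulli trials (fine for small N)
        count = sum(rng.random() < p for _ in range(pop_size))
        gen += 1
    return gen, count == pop_size

outcomes = [wright_fisher(50, 25, seed) for seed in range(20)]
fixed = sum(1 for _, f in outcomes if f)
print(fixed, "of", len(outcomes), "replicates ended in fixation")
```

Every run terminates, because fixation and loss are the only absorbing states of the process.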
Jonathan:
I think you have a really good proposal for a model. As I noted earlier, some biologists are looking at natural selection from an information-theoretic standpoint by studying generations as you suggest.
“Right now Chris is trying to understand natural selection from an information-theoretic standpoint. At what rate is information passed from the environment to the genome by the process of natural selection? How do we define the concepts here precisely enough so we can actually measure this information flow?” ( http://golem.ph.utexas.edu/category/2006/12/back_from_nips_2006.html#c006690 )
One possible way is that population models of asexual organisms looks exactly like bayesian inference models used in machine learning. Each individual allele is a “hypothesis” which after selection improves the populations “theory” of the environment.” ( http://scientopia.org/blogs/goodmath/2007/01/stupidity-from-our-old-friend-sal#comment-306296 )
So there is a possibility for synergies here.
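The Bayesian-inference analogy quoted above is exact at the level of the update equation: one generation of selection on asexual lineages and one Bayes step are the same arithmetic, with allele frequencies as the prior and fitnesses playing the role of likelihoods. A sketch (the toy numbers are mine):

```python
def replicator_update(freqs, fitnesses):
    """One generation of selection on asexual lineages:
    p_i' = p_i * w_i / (sum_j p_j * w_j)."""
    mean_w = sum(p * w for p, w in zip(freqs, fitnesses))
    return [p * w / mean_w for p, w in zip(freqs, fitnesses)]

def bayes_update(prior, likelihood):
    """Posterior proportional to prior * likelihood, normalized by the evidence."""
    evidence = sum(p * l for p, l in zip(prior, likelihood))
    return [p * l / evidence for p, l in zip(prior, likelihood)]

# each allele plays the role of a hypothesis; fitness plays the likelihood
freqs = [0.25, 0.25, 0.25, 0.25]
fitness = [1.0, 1.1, 0.9, 1.2]
print(replicator_update(freqs, fitness) == bayes_update(freqs, fitness))  # True
```

Mean fitness acts as the normalizing "evidence" term, which is what makes the correspondence line up term by term.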
But since you are discussing channel capacity, not selection, there are caveats. PZ Myers on Pharyngula notes that in the evo-devo model, phenotype variation can become fixed by later genotype variations. I.e., the genome behind a randomly expressed but beneficial phenotype can change until fixation. This would be another source for capacity. He has some posts on this.
Btw, on the post where he defines a gene, he and others note that there are several definitions, at least 7 of them, which are picked to fit the model each is used in.
Norm said: “I don’t really understand what you guys are discussing; could anyone please tell a layman like me in simple language how this is pertinent to the real essence of the ID question? Sum it up in a few sentences?
And please forgive my own impertinence. ”
Numbers talk, bullshit walks.
Sal wants to discuss channel capacity, per Shannon, but in some UNSPECIFIED relation to “evolutionary mechanisms.”
Sal cannot define what he means by this except to say “channel capacities of evolutionary processes… with reference to population rescources like numbers of organisms, numbers of mutations, etc.”
No models given other than what is mentioned above, no calculation of capacity or noise, no nothing. This is bizarre, but utterly typical of the vacuous ID “research” programme. No specifics, no math, no testable hypotheses…nada, zip, zero.
Hmmm. speaking of zip…wasn’t it about 10-12 years ago that some people were all worked up about using Zipf analysis to look at “patterns” in non-coding DNA stretches? As I recall, that didn’t pan out either — another example of incorrectly applying an analytical tool…kinda like abusing Shannon, eh, Sal?
Torbjorn: thanks for the link on Bayesian inference and alleles as hypotheses. That’s a novel approach!
Tracy P. Hamilton wrote, “Numbers talk, bullshit walks.”
That’s a bit more simplified than I was looking for, but thanks anyway.
Can’t the above posts about channel capacity and its pertinence to the ID pov be simplified for a layman (but not as simplified as Tracy put it)? Is my request too difficult?
Norm Breyfogle: Look at Vos Post’s discussion above, particularly this bit:
“(3) The basic idea struggling to break free from Sal’s verbiage is that we are treating a channel that leads in one generation from the genome of all the organisms in that generation to the genome of all the organisms a generation later”
DNA = information, and the question is how to treat it in Shannon terms of channel capacity and noise as it is “transmitted” from one generation to the next. Sal vastly oversimplifies the situation and just tosses out his “oh, yeah, it can be done” without even a genuine nod to the complexity of the task…Like Scarlet Seraph said,
these guys are reluctant to lay out the math, when they can be so successful in rhetorical ambiguity to a gullible public.
Vos Post’s brief description of the difficulties inherent in quantifying this and in dealing with noise from “transmitter” to “receiver” along with editing in a “sum over histories” – type Feynman manner…is a far clearer description of a path than Sal’s blithe “channel capacities of evolutionary processes… with reference to population rescources (sic) like numbers of organisms, numbers of mutations, etc.” ( heh, I just realized I could get a pun out of this with “Feynman propagators” ). Anyhoo, I SHOULDN’T have said Sal was JUST misapplying an analytical tool, but he is certainly not providing anything by way of meat and potatoes. It’s an ungodly-looking task to my untutored archaeological eye.
And on that note, I’ve got a vacation to enjoy. Nice forum, Mark!
Norm:
I’ll make a stab at simplifying. There are two different arguments, so I’m going to use two comments – one for each.
One of the arguments that IDists have tried to use to argue against evolution is based on Shannon information theory. (Quick note here – but important – as usual, it’s an argument against evolution – not really an argument *for* ID. IDists seem to think that they’re the only alternative to evolution.)
Shannon theory was developed at Bell Labs by Claude Shannon. The original motivation was that AT&T wanted to be able to accurately predict how much wire they needed to lay: enough to leave room for future growth so that they wouldn’t need to go back and do it again, but no more than they needed, because copper is quite expensive. So Shannon, a really amazing mathematician, worked out a theory that allowed him to analyze an information-carrying medium and characterize how much information could be carried by that medium.

When you have a communication medium connecting a sender and a receiver, that’s called a channel, and the amount of information that can be transmitted by that medium is the channel capacity.

There are two fundamental things that limit channel capacity. One is the basic properties of the medium – if you’re using a radio wave, the amount of information you can transmit is limited by the frequency. The other is interference from noise – the amount of noise on the channel defines a limit on how much you can reliably transmit without losing information to that noise. The second one – the maximum capacity with noise – is dependent on the first. (The idea is that the noise is also information – so any noise on the line must be subtracted from the theoretical maximum capacity.)
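The bandwidth-plus-noise limit described above is usually packaged as the Shannon-Hartley theorem, C = B·log2(1 + S/N). A minimal sketch (the phone-line-style figures are illustrative assumptions of mine):

```python
import math

def shannon_hartley(bandwidth_hz, snr_linear):
    """Shannon-Hartley: the maximum reliable bit rate of a channel
    with bandwidth B (Hz) and signal-to-noise power ratio S/N is
        C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# illustrative numbers only: a phone-line-like channel, ~3 kHz wide at 30 dB SNR
snr = 10 ** (30 / 10)                  # convert 30 dB to a linear power ratio
capacity = shannon_hartley(3000, snr)
print(round(capacity), "bits/second")  # on the order of 30,000
```

Note how both limits appear: widen the bandwidth or raise the SNR and the capacity grows; add noise (lower S/N) and it shrinks.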
Sal’s argument is that you can model the process of evolution as communication of genetic information from one generation of a species to its descendents; and that if you do that, the quantity of information necessary for evolution to be able to work exceeds the capacity of the information channel. According to Sal and company, if that’s true, then that would imply that information is being added by some outside agency.
My response to Sal has been, essentially, that making that argument requires a complete mathematical definition of the evolutionary information channel. Just saying that there’s
too much information isn’t good enough – he needs to show us what the capacity is, and then show us that evolution requires more than that capacity.
Sal’s response has been basically to wave his hands around and try to just repeat “It’s too much! It won’t fit in the channel!”.
Norm:
Here’s the second part.
Sal has made arguments that evolution is unlikely because the way that DNA works resembles a computing device, and DNA is capable of Turing-complete computations. What that means is that if there is any computation which can be performed on any mechanical computing device, that computation can be performed using DNA as a computing device.
Sal’s argument is that the Turing equivalent computational capability of DNA is essential to the function of a cell, and that therefore, we need to explain just how a Turing equivalent computing device is embedded in every cell.
My response to that has been two things.
First, while it seems like Turing-equivalence – the ability of a device to perform any possible computation – is a sophisticated thing, in actuality it’s really quite trivial. It’s hard to create something that can do even trivial computations without winding up with Turing-equivalence. So the fact that something is Turing-equivalent is not, in and of itself, a profound or surprising thing; it would be more surprising to find a computing mechanism that contained one of the artificial constraints that limit computational capability.
Second, the fact that a living cell contains a Turing-equivalent computing device does not mean that a living cell needs a TE computing device. The cell
may not need anything so sophisticated – but because of the basic fact that things naturally tend to be TE, it’s got something more powerful than it needs – because it’s hard to create something less powerful! And therefore, if Sal wants to make the argument that life requires a Turing-equivalent computing device, he needs to show that it actually uses the full power of a TE device – that there is some necessary function for the cell that cannot be done with less than TE computing capability.
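The claim that Turing-equivalence is cheap has a famous concrete witness: elementary cellular automaton Rule 110, whose entire update rule is an 8-entry lookup table, yet which was proved Turing-complete (Cook, 2004). A sketch (ring size and step count are my illustrative choices):

```python
def rule110_step(cells):
    """One update of elementary cellular automaton Rule 110 on a ring
    of 0/1 cells. The whole 'machine' is this one lookup, yet the rule
    is provably Turing-complete."""
    n = len(cells)
    new = []
    for i in range(n):
        # read the 3-cell neighborhood as a number from 0 to 7
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        new.append((110 >> pattern) & 1)  # 110 = 0b01101110 encodes the rule table
    return new

row = [0] * 31 + [1]      # start from a single live cell
for _ in range(16):
    row = rule110_step(row)
print(sum(row), "live cells after 16 steps")
```

That a dozen lines of table lookup already sit in the same computational class as a Dell is the point: universality is the default, not a design signature.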
Mr. Chu-Carroll the above two postings are really excellent and a real pleasure to read, thankyou.
I have pointed to Haldane’s Dilemma and the U-Paradox for starters, and to the severe evolutionary speed-limit issues. Under generous assumptions, a fixation rate of 1 novel/useful nucleotide per 300 generations. Too slow.
I cannot believe Cordova is still hawking this Haldane’s dilemma non-issue.
Why ‘too slow’, Cordova? Because electrical engineer ReMine says so? Because you think, like he does, that all evolutionary change must require some huge number of beneficial mutations?
Simply writing ‘too slow’ shows not a problem for evolution, but the shallowness with which anti-evolution pontificators treat the issue.
ReMine’s treatment of Haldane’s model is disingenuous to say the least, and your continued parroting of it is just the usual sycophancy of misunderstanding.
deadman_932:
Thank you too for your info.
There is this Pellionisz who proposes a ‘PostGenetics’ concept of information and function in recurrence and fractals in junk DNA. He has recently approached the ID movement over their similar reliance on ‘information’, at least on the blogs. But evolutionary junk DNA should be DNA that has ceased to have any function, AFAIK.
Now, if Zipf analysis is looking at power-law patterns in strings of DNA, and fractal recurrence gives a power-law relationship from self-similarity on all scales, Pellionisz hasn’t much to stand on, at least in regard to fractals.
It seems likely the whole idea is a dud, notwithstanding that he defines junk DNA differently than most biologists seem to do. His paper that purports to support him does not get any conclusive results, yet he acts as if it did.
Currently, he seems to label most sicknesses ‘postgene’, since responses to them, or their treatment, are supposedly or verifiably influenced by individuals’ genomes. When all you have is a hammer, even a borrowed one, you look for nails. And now it seems his hammer is in question.
Torbjorn: A bunch of relatively recent papers on Zipf and Mandelbrot’s applications in regard to linguistics and genomics, etc., can be found at http://linkage.rockefeller.edu/wli/zipf/index_ru.html .
It was Vos Post’s mention of market behavior that set off the memory-walk for me, since it was at the same time, about 12 years ago (and more), that nonlinear systems, fractals, Zipf and all that were being eagerly mined by market analysts.
It does seem to put a crimp in this Pellionisz’s claims — Cheers!
You misrepresent my argument, Mark.
I claim that evolutionary mechanisms, by their own definition in terms of population dynamics and RMNS, limit the channel capacity of information from the environment being fixed into a genome.
Do you see the difference between your strawman misrepresentation and what I actually am claiming?
Don’t see much math going on there, Sal.
Show us with numbers how it wont work.
Many thanks in advance,
Rich xxx.
Just out of curiosity, Sal…why do you seem to inevitably slip into ambiguity-filled broken English when you’re confronted?
In your above statement, you’re saying the ENVIRONMENT is the transmitter and the genome is the receiver — HOW PRECISELY do “evolutionary mechanism” “in terms of population dynamics and RMNS ” — LIMIT A DEMONSTRATIVELY MODELLED CHANNEL THAT YOU CAN SHOW? You DO realize that to be taken seriously, you have to SHOW a model and specify the math, right?
You’re waving about claims concerning mutation and selection and “information from the environment” without any actual basis. And don’t try to point to the articles you cited previously, since THEY contain ridiculously inadequate assumptions as well. In short, Sal, you’re relying on an unsubstantiated analogy that you haven’t shown valid.
Well, there is the generalized form, and then one that is specialized for communication channels that electrical engineers typically use (yes, and you’re welcome to believe I don’t have degrees in Math, Computer Science, and Electrical Engineering), that is described by a mostly algebraic formula in the Shannon-Hartley theorem. So you can’t accuse me of never having computed it for a specific instance, because I have. Let the reader see that, under specialized conditions, it’s not that hard. See: Shannon-Hartley.
It’s somewhat straightforward to characterize the number of bits per second if one has an appropriate Signal-to-Noise Ratio (SNR) and if the channel noise can be approximated by white Gaussian noise. A sample calculation for twisted pair is given here: Balanced Twisted Pair as a Transmission Line. The reader can see that, in certain applications, it’s not that hard to compute the theoretical maximum channel capacity.
However, apparently you missed the point I was making: such considerations of Shannon-Hartley in terms of SNR, or even the more generalized form (which can in principle deal with non-Gaussian distributions of noise, described here in Channel Capacity), are a moot point if the channel capacity (defined as actual bits per generation) is limited by the mechanical process of transferring information (evolution via natural selection into finite populations with finite reproduction speeds). The overriding constraint is the evolutionary mechanism, and those constraints are defined by population genetics. And it could be as few as 1 nucleotide per 300 generations according to Haldane’s 1957 paper on the cost of natural selection. Not really a lot of bandwidth to infuse 180,000,000 nucleotides in 5 million years!
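For readers who want to see what arithmetic is being gestured at here: taking the comment's premises entirely at face value (the 300-generation figure, the 180M target, and the framing itself are all disputed in this thread, and the generation time is my own illustrative assumption), the numbers work out as follows:

```python
# Premises taken at face value purely to make the claim concrete;
# every one of them is contested elsewhere in this thread.
generations_per_fixation = 300        # the cited Haldane 1957 "speed limit"
years_per_generation = 20             # my assumption, for a hominid-like lineage
years_available = 5_000_000
nucleotides_claimed_needed = 180_000_000

generations_available = years_available / years_per_generation
fixations_possible = generations_available / generations_per_fixation
shortfall = nucleotides_claimed_needed / fixations_possible
print(fixations_possible, shortfall)
```

This makes the shape of the disagreement clear: the whole argument stands or falls on whether those input numbers, and the single-serial-fixation model behind them, are legitimate, which is exactly what the responses below challenge.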
The actual channel capacity of an evolutionary mechanism cannot be faster than what the mechanism by its own rules would claim, and that is granting the assumption that any of the other concepts (like the capacities which Shannon defined) make such a transfer rate possible.
Regarding my claims of specified complexity being deteriorated by noise (which is the primary mechanism of RMNS), in brief, in Shannon’s world, information is the reduction of uncertainty. The Darwinists on the other hand effectively try to argue an increase in uncertainty (via RMNS) equals a decrease in uncertainty. You know, the classic illogical statement:
E = not-E
That is what I meant, in brief, and I will elaborate in time, assuming I don’t keep being derailed by your strawman misrepresentations of what I say.
Sal:
You complaining about strawman arguments is quite a hoot! Since you keep refusing to answer questions, instead going into long pointless wandering nonsense.
First – you still have not defined the communication channel that you’re talking about. I keep asking for a mathematical definition of the channel capacity, and you keep responding with long-winded non-answers. You’re supposedly talking about a well-defined mathematical quantity – but you refuse to actually do any math of any kind whatsoever. Your claim remains completely unsubstantiated until you present a Shannon theory definition of whatever information channel you’re babbling about, and use it to do a Shannon computation of the capacity of that channel. You keep trying to weasel out of doing that.
Second – you are deliberately misrepresenting Shannon. You’re playing a game here – on one side, you say that you can represent something about what the evolutionary process needs to “transmit” as a message in the Shannon sense; and then on the other side, you assert that the mechanism that evolution posits as a generator of that information cannot add information – because you’ve shifted the argument, and now what was the message is now the noise disrupting the message. You can’t have it both ways!
There you go again, equivocating on what I mean by necessary. There are two senses of the word “necessary”.
1. necessary as in, does a full replicator need a turing-equivalent machine to replicate
2. necessary as in, we DEFINE a necessary property of life as having a Turing-equivalent computing device, just like we DEFINE a necessary property of life as having some sort of metabolism and means of reproduction. If you don’t like that definition, fine….
You are arguing #1 against my #2, and that is an equivocation. In any case, you’re welcome to keep insisting that life (well-known life forms) does not even contain a Turing-equivalent machine, but that would go against the prevailing view, as I pointed out. You might be in the minority for taking that position.
But if at least some life has Turing-equivalent machines, then that suggests design, unless of course you are prepared to argue mindless stochastic processes can make a self-replicating Turing Machine. I point out Hofstadter himself was puzzled at this:
PS
By the way, a previous comment of mine has been held up pertaining to Shannon capacity.
The channel is the environment to the population. The channel is RMNS transferring information into the population genome. I gave the average channel capacity in terms of nucleotides at 1 per 300 generations (or 2 bits per 300 generations), which is some trivially small number of Hz. I pointed out that this is granting that SNR or other considerations will allow such an evolutionary mechanism to operate that quickly.
To give you an idea of how tough this is, in the extreme consider we have 5 Billion people on the planet geographically separated. How long do you expect a beneficial mutation (a single nucleotide, say an insertion) arising in one individual — how long would that nucleotide mutation take to overtake the entire population such that every living person on the planet has that mutation? Pretty long, if ever! Of course, you can speculate the population in the past was small and well stirred, but let the reader see, it ain’t so easy to phyletically transform a population even by the definitions and equations of the theory itself.
There are two issues:
1. Is the Darwinian mechanism self-contradictory based on Shannon’s definition of information? (Darwinism unwittingly argues increase in uncertainty = increase of information)
2. Even granting that Darwinian theory is not self-contradictory, is there sufficient time for evolution to work (in other words, the channel capacity problem)?
Perhaps that will clarify some of the issues for the readers.
Thank you for unspamming my previous comment. Much appreciated.
Sal:
No, it’s not an equivocation. As I keep explaining: anything with a computational capability will tend towards Turing equivalence. So the fact that a computational mechanism within a cell is TE is not surprising. But you keep on insisting that life requires a TE computing device.
You want to make the claim that “A Turing-equivalent computing device is a required component of life”. You don’t get to define “life” your own way, and put any arbitrary requirements on it that you want. If you want to make that claim – that a TE computational capability is required by life, then you need to support that claim – by showing one single example of where that TE capability is required. Instead of doing that, you continually play definitional games to avoid the actual argument.
MarkCC observed:
Actually, we’ve got a pool going over here about how long he’ll keep it up. . . .
Sal:
Saying “The channel is the population” is not a valid mathematical definition of a Shannon channel. I repeat, if you want to use Shannon theory to argue that the channel is insufficient, you need to show a proper, valid Shannon definition of the channel, and a Shannon computation of its capacity. You keep refusing to do that – finding all sorts of ways of weaseling around.
No matter how much you weasel, there is no argument until there is a valid definition of the channel, and a valid computation of its capacity. Without those, there’s no meaningful way to debate the point – you can continue to just assert that the capacity isn’t there, and nothing anyone can say will refute that, because you’ve left the definition open enough for you to weasel out of any
counterargument.
So put up or shut up: where’s the valid definition of a Shannon channel? Where’s the valid computation of the Shannon capacity of that channel?
How does the size of the population figure into this? Is the channel capacity of the population that includes only me the same as the channel capacity of the population that includes all 6 billion people on earth? If not, which one has an average channel capacity of 2 bits per 300 generations, and how did you arrive at that number?
Thanks for the lucid simplification, Mark CC. Still, I’m afraid I’d have to do quite a bit more study before I can engage seriously in the details of the above arguments. I think I’ll have to stick to the generalities for a while to come.
And generally speaking, it does indeed seem to be entirely beyond our abilities to determine the mathematical likelihood of evolution being guided by an “outside” force (not to mention the additional difficulties arising from the semantic fuzziness of the phrase “intelligent designer”) simply by looking at the channel capacity of DNA.
Let me ask the biologists on this board something that’s been in my mind re evolution for years now: doesn’t it look like it’s a very huge – possibly anomalous – leap in a relatively short evolutionary time between protohumans and modern man, especially in terms of brain size?
No agenda here; I’m just asking for your educated opinions.
(1) My frequent coauthor Dr. Philip Vos Fellman believes that the issue of the channel capacity of evolution by natural selection, raised however imperfectly by Sal Cordova, is a worthy topic for a paper by myself and Prof. Fellman;
(2) We would likely present it at ICCS-2007.
Abstract Submission Deadline: June 30, 2007
Early Registration Deadline: August 15, 2007
Paper Submission Deadline: August 31, 2007
I’ve presented many papers at ICCS-2004 and ICCS-2006, many with Dr. Fellman, and I chaired 3 sessions at ICCS-2006. The conference always draws several Nobel Laureates, is delightfully interdisciplinary, and hopefully will have Blake Stacey again and others of Mark CC’s readers. ICCS-2007 will have a track on Systems Biology:
“High throughput data and theoretical modeling are combining to create new opportunities for systems understanding in biology. In addition to the comprehensiveness of genome-scale analysis of molecular pathways and networks, we are particularly interested in building toward an understanding of living systems at all scales and levels of organization. This will include aspects such as: emergence of higher-order (system-level) features, pattern formation, multiscale representation, etc. You are invited to submit abstracts/papers in experimental and theoretical areas of systems biology. Topics include but are not limited to studies on:
* System levels
o DNA/Protein sequence analysis: genome-scale comparative analysis, motifs, evolution
o Regulatory pathways/circuits: stochastic simulation; deterministic, non-linear dynamics, in situ pathway visualization
o Molecular networks: topology (global structure, local motifs) and dynamics
o Cell and organismal physiology: Cell migration, Multi-cell behavior, Systems control, Homeostasis and disease, Scaling laws
o Development: Spatiotemporal patterns, developmental constraints, robustness
o Behavior: brain and behavior, group dynamics
o Population and evolutionary dynamics
* Concepts
o Robustness and Control
o Noise, Oscillations, Chaos
o Fractals, power laws, Time series
o Multiscale modeling
* Tools
o Genomics and Proteomics techniques
o Databases, data mining, analysis and visualization tools
o In situ imaging techniques (microscopic and macroscopic)
(3) Thanks to the reminder from Torbjörn Larsson, I went back to John Baez’s blog (allegedly the oldest in the world), and followed the link to “Chris Lee” — namely Prof. Christopher Lee at UCLA. I emailed my 10-point crude sketch of a theory and asked for his evaluation.
(4) I am more forgiving of Sal than some here because, whether he offers an answer or not, he does raise an interesting question. Also, he liked my excerpt from a paper about my dissertation research enough to say he wished I was on his side. I am on the side of truth, wherever it leads me. By the way, Chris Lee also seems to phrase the channel as one where information is transmitted by the environment and received by the genome.
(5) I am not sure about Sal’s use of the global human population. I did do Mathematical Population Biology in grad school, right at the time when the math got harder, because the field was invaded by astrophysicists who recognized some biology equations and plucked the low-lying fruit. Population “bottlenecks” are relevant. When the population crashes, information is lost. This seems to have happened for humans at least once. Might be catastrophe for a short time (I am NOT saying Noah’s Flood!) or slight diminution for a long time (climate change in Africa, or tough times for early humans in Europe and the Middle East).
There is a good question before us. I am going to commit some time. Thus I’m grateful to Sal on the one hand for raising the question, however oddly, and to Mark and the blog commenters for pruning the conversation and adding useful advice. What fun!
Sal, this nonsense here in response to Mark’s, “By golly, you actually got something right!
No, I don’t believe that you actually know or understand how Shannon defines channel capacity, or how it’s computed.”
You know the nonsense; where you say, “Well, there is the generalized form and then one that is specialized for communication channels that Electrical Engineers typically use (yes, and you’re welcome to belive I don’t have a degrees in Math, Computer Science, and Electrical Engineering), that is described by a mostly algebraic formula in the Shannon-Hartley theorem. So you can’t accuse me of never having computed it for a specific instance, because I have.”
represents a classic (but in your case all-too-common) occurrence: deliberate misunderstanding.
Since you clearly have reading comprehension problems, I’ll parse it out for you:
1) you were not accused of not having a degree in Math, Computer Science, or Electrical Engineering – this is a red herring.
2) you were not accused of never having computed any instance of a Shannon channel capacity – this is a red herring.
You can either deal with what is being asked of you, or you can’t.
You might try; at the moment, you just look like you’ve no idea what you’re talking about.
I mean, in all seriousness, Sal – you’re not stupid. To continue to evade, avoid, misrepresent, deny, or just plain say idiotic things gets you laughed at.
Do you REALLY want to spend every single appearance on a blog being laughed at? I mean, REALLY?
OK, more silliness from Sal:
To give you an idea of how tough this is, in the extreme consider we have 5 Billion people on the planet geographically separated. How long do you expect a beneficial mutation (a single nucleotide, say an insertion) arising in one individual — how long would that nucleotide mutation take to overtake the entire population such that every living person on the planet has that mutation? Pretty long, if ever!
Naturally. There is a size limit on populations beyond which it becomes almost impossible to fix a single gene. This is basic genetics. Tell us something we don’t know, please.
Of course, you can speculate the population in the past was small and well stirred, but let the reader see, it ain’t so easy to phyletically transform a population even by the definitions and equations of the theory itself.
This is meaningless bafflegab; we know the population sizes were smaller; the fixation rate is much faster in geographically isolated small populations.
Sal, please learn something about genetics before you start spouting off. Thanks.
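The point about fixation can be made quantitative with a standard population-genetics result that neither commenter spells out: Kimura's diffusion approximation. A minimal sketch (function names and example values are mine); the sweep-time formula is the usual rough logistic estimate:

```python
from math import exp, log

def fixation_probability(N, s):
    """Kimura's diffusion approximation: probability that a single new
    mutant with selective advantage s fixes in a diploid population of
    effective size N (initial frequency 1/(2N))."""
    if s == 0:
        return 1.0 / (2 * N)  # neutral case: pure drift
    return (1 - exp(-2 * s)) / (1 - exp(-4 * N * s))

def sweep_time(N, s):
    """Rough deterministic estimate of the generations a beneficial
    allele needs to sweep from 1/(2N) to near-fixation: ~ (2/s) ln(2N)."""
    return (2 / s) * log(2 * N)

# Fixation probability flattens out near 2s once N is large, but the
# sweep time grows with N -- which is why fixation is faster in small,
# geographically isolated demes:
for N in (100, 10_000, 1_000_000):
    print(N, fixation_probability(N, 0.01), sweep_time(N, 0.01))
```

This is only the single-locus, well-mixed idealization; it ignores the population structure and introgression raised later in the thread.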
There are two issues:
1. is the Darwinian mechanism self-contradictory based on Shannon’s definition of information? (Darwinism unwittingly argues increase in uncertainty = increase of information )
No. It’s not. You have yet to demonstrate anything along these lines in your post. As Mark has pointed out, you’ve yet to actually define anything.
2. Even granting Darwinian theory is not self-contradictory, is there sufficient time for evolution to work (in other words, the channel capacity problem)?
Since you haven’t defined the channel, how could we know?
Do the math, Sal. Define the channel (mathematically). Define the channel capacity (mathematically). Define the size of the ‘evolutionary message’ (mathematically).
Otherwise, you’ve said nothing.
Perhaps that will clarify some of the issues for the readers.
Not until you actually show that there is real math behind your assertions. C’mon, Sal – you’ve taken Comp Sci classes. Would any professor of yours ever have accepted “well, I claim my program works; I just won’t show it to you” as an answer?
Screwing up Haldane is bad enough. Relying on it for social-engineering purposes is worse. Screwing up Shannon-Weaver is typical. I’ll look forward to your peer-reviewed paper on this, Schmuckdova.
Salvador wrote
That’s unadulterated bullshit. Even when a GA employs “elitism” — preserving the currently best solution into the next generation unmutated — all the rest of the critters are subject to mutation. In my company’s GAs we don’t even use elitism, so all critters are subject to mutation.
Regarding the model sketch of Jonathan Vos Post: If one wants to characterize evolution in information theoretic (Shannon) terms, there are two quite different approaches one might take. One is that you have sketched, where the transmitter is the parent population, the receiver the child population, and the transmission channel the various operators that govern/affect reproduction.
A second approach (they’re not mutually exclusive) is to make these assignments:
Transmitter = environmental variation, where “environment” includes the physical and biological variables ‘encasing’ a population of replicators;
Receiver = the genome of the population, where “genome” means the distribution of alleles in the population of replicators;
Channel = the set of evolutionary operators that alter the genome of the population through time as a function of differential replication due to heritable differences among lineages.
Then one characterizes changes in the mutual information of environment and genome through time to assess the dynamics of evolution. Easier said than done, of course. This is apparently the ‘model’ Salvador has in mind, though it’s pretty muddled there.
Scarlet Seraph: Your posting style is reminiscent of a second-generation descendant of the Prague-born poet.
RBH
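RBH's second framing can at least be made concrete on a toy example. A minimal sketch, with entirely invented numbers (two environment states, two alleles), of the mutual information between environment and genome:

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits, from a joint distribution given as a dict
    {(x, y): p}. X plays the role of the environment, Y the allele."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p  # marginal over environment states
        py[y] = py.get(y, 0.0) + p  # marginal over alleles
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Hypothetical numbers: if the allele tracks the environment 90% of
# the time, the genome carries about half a bit about the environment.
joint = {("wet", "a"): 0.45, ("wet", "A"): 0.05,
         ("dry", "a"): 0.05, ("dry", "A"): 0.45}
print(mutual_information(joint))  # about 0.531 bits
```

Tracking how that quantity changes across generations is the "easier said than done" part: the real joint distribution is over whole genomes and high-dimensional environments.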
RBH said, Scarlet Seraph: Your posting style is reminiscent of a second-generation descendant of the Prague-born poet.
And you, sir, are a scholar and a gentleman – and most perceptive. %;->
Sal:
Well, it was you who started to discuss these channel capacities:
Jonathan described the capacities for all evolutionary processes, including population dynamics and the specific RM+NS mechanism, as a channel through generations.
What you are trying to do is to discuss another channel that will not capture the whole capacity and behavior of evolutionary mechanisms.
You can do that, but it is you who then makes a strawman of your initial discussion.
And you still have to come up with a model – the limited task you set is likely harder than Jonathan’s. You must model a very complicated receiver while denying yourself information of its states.
Sal:
This isn’t an existing and verified limit as you would be aware if you read the comments on Haldane’s model. From the PT thread I linked to above:
“Just pointing out that as far back as 30 years ago, in
Solutions to the Cost-of-Selection Dilemma
Verne Grant and Robert H. Flake
Proc Natl Acad Sci U S A. 1974 October; 71(10): 3863-3865.
http://www.pubmedcentral.nih.gov/articlerender.f…
..the basic assumptions/limitations of Haldane’s original model were laid out and the various “solutions”, some excellently discussed in this thread, were summarized: soft selection/intraspecific competition, population structure, non-independent fitness effects (Mayr , Mettler and Gregg)/truncation selection (King, Maynard Smith, Crow, Felsenstein), gene linkage, and drift(Kimura) and introgression.”
The limit you insist on assuming is for a specific model that doesn’t include all the other evolutionary mechanisms mentioned above. As far as the scientific community is concerned, the dilemma was solved decades ago.
If you insist that this is a real limit, verify your prediction and show that all other mechanisms are wrong.
Sal:
First, on science, there are no Darwinists here apart from you, who insist on Haldane’s model with variation and selection. The rest of us are discussing evolutionary theory, which by now includes many more mechanisms. Also, no one has defined ‘specified complexity’ and shown that it is a useful concept by applying it in predictions.
Second, if you read Jonathan’s comment with his suggestion for a channel model, you can note that specifically “”Noise” in the channel is itself hard to define. This is not the same as the mutations themselves.”
The problem of defining noise in the specific channel lies at your feet, not ours. Remember, it is you who insists that evolution should be considered from a channel capacity perspective, and that it will tell us something about evolution.
Let me also remind you that this will still not tell us something about creationism/ID. We want some answers there too.
Norm:
Since a channel limit (unlikely to exist, see discussion on Haldane’s dilemma above, and unlikely to be predicted, see discussions regarding the difficulties to model) would only be a negative argument against evolution, it would say exactly nothing on other theories, specifically not on ID’s speculations.
But Sal can’t acknowledge that.
Disclaimer: I’m not a biologist.
Yes and no. There are plenty of fossilized skulls from hominina, so one can see the trends. There are no leaps; there are continuous trends. One nice compilation is here: http://www.pandasthumb.org/archives/2006/09/fun_with_homini_1.html .
As you can see from the nice chart of 214 specimens, the trends and variations in cranial capacity overlap nicely over the last 3.5 million years. (The brain mass as a percentage of body mass shows a lesser increase, of course; see the linked posts.) Remarkable trends are the rapid increase in the neandertal population, who had the largest brains, followed by a drop from archaic sapiens to today’s smaller sizes.
What it all means in terms of brain organization and behavior is less clear, to say the least. Though there are now data showing that human genes that regulate brain development have varied a lot more than those in comparable populations of chimps.
Also, there is DNA evidence that sapiens have picked up beneficial genes or alleles by introgression from separated populations of hominina, as they spread out over Earth. (One of those mechanisms Sal likes to imagine doesn’t exist.) What made us human was likely, in large part, all these other cousins of ours. ( http://johnhawks.net/weblog/reviews/neandertals/neandertal_dna/introgression_faq_2006.html ; http://isteve.blogspot.com/2006/12/greg-cochran-john-hawks-clan-of-cave.html )
Scarlet:
Oh. My bad – “a matter of time” and “almost impossible” aren’t sufficiently alike, at least in intention.
Sorry – the edited version above was supposed to be:
“Since a channel limit would only be a negative argument against evolution, it would say exactly nothing on other theories, specifically not on ID’s speculations.”
Torbjörn : Thanks for the link to the cranial capacity chart on Panda’s Thumb. And thanks also for acknowledging (with your “yes and no” comment) that my question wasn’t based on nothing. In fact, from those charts it’s apparent that cranial capacity (in general, including all the various hominid ancestors to modern homo sapiens), and even the rate of encephalization (though not as much), have indeed increased over the last 3 million years. Now, that is a much shorter timespan than, for instance, the “Cambrian Explosion,” and thus in the vastness of geological time, it can be considered to be a punctuated event.
So, why did evolution “speed up” (at least until our huge, world-spanning population slowed it down)? And, how “punctuated” must an event be for it to be considered anomalous?
And, of course, I was specifically agreeing – though perhaps too clumsily and glibly – with the fact that such concerns don’t have a bearing on ID (due to the extreme fuzziness of the terms “intelligent” and “design” when implying a non-human, cosmic consciousness like “God”).
Norm:
Yes, it seems rapid for this layman, certainly more rapid than the parallel body mass increase. But if it is rapid in comparison with other changes or species, I don’t know.
The reason and means for hominina’s (sapiens and its immediate hominid ancestors, see Wikipedia) increase in brain size, change in brain organization and accompanied changes in behavior will be looked further at, I’m sure.
Several reasons and means have already been proposed, of course. Reasons and means that connect with this thread, and seem verified, are the “Out of Africa” bottlenecks (mitochondrial DNA suggests one female ancestor; Y chromosome DNA suggests a few male ancestors) and repeated introgression. Hominids who stayed put, with larger mean effective population sizes, didn’t seem to change as much.
We share many behavioral characteristics with other hominids, but socialization, especially language and learning, are markedly more important. For example, the fact that we are the only apes with whites around our eyes, showing others what we are looking at, is taken as pointing to that.
Since socialization and language demand a lot from a brain, they could be important driving forces for change. And simple phenotype changes that satisfied much of that demand seem to have been the prolonged growth period in sapiens and the keeping of childlike features, including brain plasticity.
“Yes, it seems rapid for this layman, certainly more rapid than the parallel body mass increase. But if it is rapid in comparison with other changes or species, I don’t know.”
But isn’t brain to body size dramatically greater in us than in all other species except for maybe cetaceans? And the bulk of our increase has been over the last 3 million years (according to the bulk of the data, ignoring any anomalous counter-evidence).
That’s a definite – even dramatic – spike, isn’t it?
My first word in my second paragraph above should be “And”; I don’t believe I was contradicting you.
The moment ID was born by lopping off from Biblical Creationism any reference to the Christian God (as required by U.S. courts), ID was destined to become nothing more than an ad infinitum iteration of the null hypothesis.
The stuff about “gaps” in the fossil record is a restatement of Zeno’s Paradox showing that motion cannot exist because each infinitely small distance must be overcome in a finite period of time, therefore movement through infinite subdivisions of linear distance X must require an infinite amount of time.
Zeno’s Paradox bothers me. If you’re willing to say that there are an infinite number of subdivisions that one must pass through, why can’t one say that the subdivisions become infinitely small? It seems to me that you must either stick with finite values throughout (in which case you must eventually run out of them, and be able to proceed) or you must be allowed to use infinity throughout (in which case the divisions become infinitely small and can be passed through in an infinitely small amount of time).
I mean, I know that Zeno’s Paradox is false, but it seems downright dishonest.
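The resolution is exactly the commenter's second option: the subdivisions shrink geometrically, so the times to cross them form a convergent series. A quick numerical check, assuming each successive Zeno interval takes half the time of the previous one:

```python
# Crossing the n-th Zeno subdivision takes (1/2)^n of the total time.
# Infinitely many terms, finite sum: the partial sums converge to 1.
total = 0.0
for n in range(1, 60):
    total += 0.5 ** n
print(total)  # -> effectively 1.0 (exactly 1 - 2**-59)
```

Infinitely many positive terms need not add up to infinity, which is all the "paradox" really rests on.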
Here’s the easy part in calculating the entropy of Natural Selection. This is the death calculation, simpler than the birth calculation.
Suppose that at time t=0 (generation X) there is a population of organisms O(i) of size N of a sexually reproducing species. One organism is selected (scythed) from the N for immediate death with probability inversely proportional to its fitness (as normalized by the population).
That is, there is a function
f : O(i) -> (0,1) which
maps each of the N to a scalar value which is normalized to a probability in the range (0,1). The scythe operation selects a specific organism A from the set of N with probability
(f(A)^(-1))/C where C is the normalization constant
C = SUM[from i = 1 to i=N](f(i)^(-1)).
Example: suppose N = 2, A has fitness f(A) = (1/4), B has fitness f(B) =(1/2), so C = (4/1)+(2/1) = 6.
Then the probability of scything A is ((1/4)^(-1))/6 = 2/3, and the probability of scything B is
((1/2)^(-1))/6 = 1/3. A is exactly twice as likely to die as B, since f(A) is a half of f(B).
Now, the Shannon information in scything A depends on the fitness f(A) as well as on the distribution of fitnesses of the other organisms in the population.
We are not surprised (little information) when an organism of tiny fitness is killed “by the environment.” We are surprised (more information) when an organism of high fitness is killed “by the environment.”
By Shannon’s definition, this entropy is
H = – SUM[from i=1 to i=N] P(i) lg P(i)
where P(i) is the probability of scything organism number i, and
lg(x) = log(base 2)x.
Again, there is a normalization constant (which secularly varies as the population evolves):
C = SUM[from i=1 to i=N] f(i)^(-1).
Substituting:
H = - SUM[from i=1 to i=N] (f(i)^(-1)/C) lg(f(i)^(-1)/C)
= - (1/C) SUM[from i=1 to i=N] f(i)^(-1) (lg(f(i)^(-1)) - lg C)
= (1/C) SUM[from i=1 to i=N] f(i)^(-1) (lg(f(i)) + lg C).
Please correct me if I made an error in elementary algebra, or parenthesization.
For our N=2 example, the scything probabilities are P(A) = 2/3 and
P(B) = 1/3, so the entropy is
H = -(((2/3) * lg(2/3)) + ((1/3) * lg(1/3))) =
0.918295834
which is less than a bit. If the two organisms had equal fitness, then the scything would be exactly 1 bit.
That’s the easy part of the calculation. The hard part is the next step. Pick 2 organisms D and E from the remaining population of N-1 organisms (or two copies of the single remaining organism, if only one remains). Probabilistically make one offspring by some random combination of point mutations, inversions, and a random crossover of the genes of D and E. Place the child in the population, which now at time t=1 (generation X+1) has N organisms. That’s harder to calculate, as it has several random variables, or coefficients or probability distributions, associated with each mutation and crossover operation.
But that seems to be a start to calculate what we want, namely a Shannon entropy, eventually a channel capacity (modulo a model of noise) in the model of evolution by natural selection.
Again, I stand by for corrections to this first cut of the first part of my first cut.
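The "hard part" step can at least be sketched mechanically, even if assigning it an entropy is the open problem. Everything here (string genomes, the ACGT alphabet, the mutation rate) is an illustrative assumption of mine, not Jonathan's model:

```python
import random

def reproduce(parents, mutation_rate=0.01, rng=random):
    """Toy version of the reproduction step: pick an ordered pair of
    parents, do a single-point crossover, then apply point mutations.
    Genomes are equal-length strings over {A, C, G, T}."""
    d, e = rng.sample(parents, 2)       # ordered pair: order is random
    x = rng.randrange(1, len(d))        # crossover point
    child = d[:x] + e[x:]               # initial segment of d, tail of e
    bases = "ACGT"
    child = "".join(rng.choice(bases) if rng.random() < mutation_rate else b
                    for b in child)     # independent point mutations
    return child

rng = random.Random(42)  # seeded for reproducibility
print(reproduce(["AAAAAAAAAA", "CCCCCCCCCC"], rng=rng))
```

The entropy of this step would have to combine the parent-selection, crossover-point, and per-site mutation distributions, which is exactly why it is harder than the scythe calculation.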
I have seen a diagram, perhaps on Talk Origins, that had some other animals nearly as much above a typical line.
That I don’t know. What I can see from the link I gave you is that brain mass has increased more than body mass.
What do you propose to compare with? I remember a recent article stating that the world’s largest flower has now had its ancestry established: a group with very small flowers that underwent a dramatic increase in size. Perhaps we should dig that up and see if they have determined the time involved.
And again, that tells us only about speed of increase in mass, not capability and behavior which is really what defines what a brain does.
Oh, btw, if the diagram looks approximately exponential, it was IIRC somewhere noted that it is an expected answer from a fitness pressure. An increasing number of beneficial alleles adds effect multiplicatively.
“brain mass has increased more than body mass.”
D’oh! Brain mass has increased faster than body mass.
Jonathan Vos Post: Are you a biologist? When I clicked on your name I was taken to a website that seems to have nothing to do with you (at least, I couldn’t find your profile anywhere).
I see you’re describing channel capacity calculations (and if the equations you show are the easy part, I see once again why I’m an artist and not a mathematician).
If you are a biologist, especially one with expertise in Darwinian theory, I’d like to hear your speculations on my question. However it’s diced, there does seem to be an undeniable and dramatic spike in brain size and encephalization rates for our evolutionary line in the last 3 million years, compared to the rest of the history of evolution as evidenced in the fossil record (unless I’m mistaken; I’m not a biologist).
Is this true? And, if so, how might we be able to account for such an anomaly?
Thanks, Torbjörn. Apparently we were both writing and posting at the same time.
I’m curious what Jonathan Vos Post has to say about my question, as well.
Addendum to previous calculation, and response to Norm follows.
Again using Google as my calculator, suppose the population is of size N=2 and the ratio of scything probabilities is 9 to 1, or 99 to 1, or 999 to 1.
H = -((9/10)*lg(9/10) + (1/10)*lg(1/10))
= 0.468995594
H = -((99/100)*lg(99/100) + (1/100)*lg(1/100))
= 0.0807931359
H = -((999/1000)*lg(999/1000) + (1/1000)*lg(1/1000)) = 0.0114077577
or very roughly half a bit, 8% of a bit, and 1% of a bit respectively.
The arithmetic of this is fairly simple, even if the underlying equations are long enough to scare some artists.
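The same arithmetic, checked mechanically (`binary_entropy` is my name for the two-outcome case):

```python
from math import log2

def binary_entropy(p):
    """H(p) for a two-outcome distribution, in bits."""
    return -(p * log2(p) + (1 - p) * log2(1 - p))

# Scything entropies for increasingly lopsided probability ratios:
for p in (0.9, 0.99, 0.999):
    print(p, binary_entropy(p))
```

As the outcome gets more predictable the entropy falls toward zero, which is the "we are not surprised when the unfit die" point in quantitative form.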
Norm,
I also wonder about brain-size with respect to body size. There is the recent argument about whether dolphin and whale brains are big just for temperature control purposes in cold water. Neanderthals had brains roughly our size, maybe even bigger, as I vaguely recall. The largest human brain ever measured was of an idiot. Some geniuses had smaller than average brains. The subject is very confusing to me, even after all the neuro courses I took in grad school, and evolutionary biology courses.
I have researched and published in Mathematical Biology, and have taught Ecology and Human Evolution (for college credit in the first case, by assignment when the regular professor was unavailable, and for senior citizens not for credit in the second).
I have my finger in many pies, and try very hard to cross disciplinary boundaries. I detest the “Two Cultures” paradigm of C. P. Snow. I am also an artist from a family of artists, and a writer/editor from a family of writer/editors.
This thread is not about me, however. The teaching version of my resume is at:
http://www.magicdragon.com/SherlockHolmes/resumes/JVPteach.html
Some sense of what I was doing and publishing by mid-2006 is at my livejournal blog:
http://www.magicdragon.com/SherlockHolmes/resumes/JVPteach.html
For a more wide-screen analysis of who I seem to be, Google “Greatest Nerd of All Times.”
Norm: I did form-submit an answer over an hour earlier. But it had hotlinks, and thus autofilterably shuttled off to be reviewed by our blogmaster. You raised an interesting question. Patience.
Norm Breyfogle asked
I’m afraid you’re mistaken. The rate of (phenotypic) change due to evolution can vary considerably. See TalkOrigins (http://tinyurl.com/ypyo3q) for some background.
So the very recent rate of increase in brain size is not a “spike” at all, but is well within the range estimated from fossils (max = 32 darwins) and is considerably slower than some observed rates. Since the curves at Panda’s Thumb show that the rate has been increasing over the last 3 million years, the estimate for the recent period (7 darwins) is actually higher than the rate of increase was earlier in the period. So, no spike necessary.
(about rapidly increasing brain size…)
The speculation I’ve heard was that it may have had something to do with a change in diet to coastal food sources (oysters, fish, etc.). Large brains require a great deal of energy to work and a fair bit of fat to build; such a diet would provide both.
I can’t seem to find it online, but there was a fairly recent Scientific American article about different diet/lifestyle strategies used by primates and how they relate to intelligence. Some species specialize in low quality/high fibre diets (leaves) which don’t require much intelligence. They have slow digestions to get the most from the food they eat, and get by by not expending a lot of energy. Others specialize in higher quality foods. Ripe fruits, young shoots, insects, other animals, that sort of thing. Much less cellulose, more protein. But to effectively find, recognize, and exploit the high quality foods in their environments, they need to be much more intelligent than their cousins, and they are. They may even use tools to exploit some of these food sources. The trade-off is that these primates require far more energy in the course of their daily lives. They need the large brains and active bodies to exploit their high quality food sources, but they also need the high quality food sources to maintain their large brains and active bodies. Humans could be seen as an extreme example of this strategy.
But hominid evolution is messy. Some of our Australopithecine cousins appear to have kept the bipedalism but started specializing on lower-quality foods. Specializing on higher-quality foods isn’t necessarily better. (Sure these branches went extinct eventually, but they would not have appeared in the first place if there weren’t advantages to their strategy.)
Next step in calculating the entropy of evolution by natural selection is to calculate the entropy of sexual reproduction. The following is a mathematical simplification, of course, but not so simplified as to pretend that every possible pair of organisms is equally likely to mate (an assumption called “panmixia”).
Assume that we have a population of N organisms O(i) for 1 ≤ i ≤ N …
I posted:
“Assume that we have a population of N organisms O(i) for 1 =”
{it seems that I somehow cut out everything after the line “for 1 =”; the “less than” sign was swallowed as HTML, as corrected below}
Sounds like there may be some disagreement here re whether or not rapidly increasing brain size and encephalization rates constitute a “spike” in evolutionary rates.
Jonathan, Torbjörn, and Andrew actually offer some reasons for the increasing rate, but RBH apparently denies the increasing rate even exists as a “spike.”
RBH, perhaps you disagree with my use of the term “spike,” but the fact that evolution is in some sense radically accelerating is pretty undeniable in light of the exponential curve created when plotting not only brain size or encephalization over time, but also the increasing complexity of information over time.
Just map the history of life on a scale the size of the now lost World Trade Center, with the top of the top (108th) floor being the present day. In this model, the first living cells appeared on the 25th floor, fish on the 97th floor, dinosaurs on the 104th to 107th floors, mammals on the very top floor, and homo erectus in only the last few inches.
Obviously, the growth in brain size, encephalization, and intelligence describes an exponential curve, even appearing to approach an asymptote in a very dramatic increase within the last top floor and continuing more dramatically in the top few inches of the above model. If this isn’t a “spike,” I invite you to use any other appropriate descriptors.
So, why this radical increase? is my question. I’d offer that it’s a function of emergent phenomena à la chaos theory. Apparently it’s natural for evolution to speed up in terms of producing complexity.
Double sorry. The sign for “less than” is seen as HTML. So the line in question should read:
Assume that we have a population of N organisms O(i) for
1 less-than-or-equal i less-than-or-equal N
Not a typing error. A “forgetting the nature of the form input” error.
Btw and for the record, I only mentioned my being an artist and not a scientist or mathematician because I’m afraid I’ll inevitably stick my foot in my mouth in the company of so many great minds here on this blog and in the related science blogs in general (in fact, I have already). I intended it as the opposite of bragging and as an apology, instead.
Jonathan, I’ve now discovered some of your poems and other writing. You also mentioned that you’re an artist. Did you mean an artist of the word, or a visual artist, too? If visual too, I’d love to see some of that artwork. You’ve got a very impressive range of accomplishments, undoubtedly still far beyond that of which I’m aware.
Next, what is the entropy of a particular crossover of the chromosomes of two selected parents?
[Thanks for the kind words, Norm. Again, this blog is not about me, nor is this thread. Perhaps we can discuss my family’s visual arts, and mine, in some other venue.]
I set up this model to have probabilistic selection of an ordered pair of parents. Otherwise, if we’d selected an unordered pair of parents, we’d have an artificial 1 bit of choice of which parent provides the initial substring of the child’s chromosome. Again, “lg” means logarithm to the base 2.
So, given a selected ordered pair of parents, we further assume that a crossover is equally likely to be at any given point. Again, a simplified model. Further, we are simplifying to say that the crossover is an initial string (of codons) from one parent concatenated to a terminal string (of codons) from the other parent.
Visually symbolizing the crossover, at random crossover point “x”:
Before:
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
BBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
During:
AAAAAAxAAAAAAAAAAAAAAAAAAAAAAAA
BBBBBBxBBBBBBBBBBBBBBBBBBBBBBBB
After:
AAAAAABBBBBBBBBBBBBBBBBBBBBBBB.
In vivo, the crossover might be more complicated. If there were two random crossover points “x” and “y”:
Before:
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
BBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
During:
AAAAAAxAAAAAAAAAAAAAAAAAAAAAyAAA
BBBBBBxBBBBBBBBBBBBBBBBBBBBByBBB
After:
AAAAAABBBBBBBBBBBBBBBBBBBBBAAA.
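The before/after pictures translate directly into executable form; here is a rough Python sketch of both crossover variants (function names are mine, with strings standing in for codon sequences):

```python
import random

def one_point_crossover(a, b, x=None):
    # Child = initial segment of one parent + terminal segment of the other.
    if x is None:
        x = random.randrange(1, len(a))  # uniform crossover point
    return a[:x] + b[x:]

def two_point_crossover(a, b, x=None, y=None):
    # In-vivo-style variant: the middle segment comes from the other parent.
    if x is None or y is None:
        x, y = sorted(random.sample(range(1, len(a)), 2))
    return a[:x] + b[x:y] + a[y:]

print(one_point_crossover("A" * 30, "B" * 30, x=6))
# AAAAAABBBBBBBBBBBBBBBBBBBBBBBB
print(two_point_crossover("A" * 30, "B" * 30, x=6, y=27))
# AAAAAABBBBBBBBBBBBBBBBBBBBBAAA
```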
Anyway, assuming a single equiprobable crossover on a chromosome of length L, the entropy is simply lg L. The redundant derivation is:
assume the crossover point is equally likely to fall at any of L points, so each point i has probability 1/L; then:
H = – SUM[from i = 1 to L] (1/L) lg (1/L)
= – (L)(1/L) lg (1/L) = (-1) lg (1/L)
= – (- lg L) = lg L.
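A numeric sanity check of that derivation, summing the L equal terms directly (pure arithmetic, no new assumptions):

```python
import math

def crossover_entropy(L):
    # H = -SUM[i=1..L] (1/L) lg (1/L), which should collapse to lg L.
    return -sum((1 / L) * math.log2(1 / L) for _ in range(L))

print(crossover_entropy(64), math.log2(64))  # 6.0 6.0
```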
Next, the entropy of the inversion operation. Again, we simplify by pretending that a single inversion takes place with probability P(I), and with a loop equally likely to be of any length up to L. We visually symbolize the inversion by randomly selecting two inversion points, between which the string is reversed in direction.
Before:
123456789
During:
12x3456y789
After:
126543789.
These are a subset of permutations, or endofunctions, but that’s irrelevant to this model.
So what is the probability of a specific inversion? It is the probability of picking an ordered pair (x, y) with 1 ≤ x ≤ y ≤ L, times the probability P(I) of invoking the inversion operator at all.
By our oversimplification, we have L^2 equiprobable choices, so the entropy at this step is lg (L^2) = 2 lg L.
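And the inversion operator itself, in the same sketch style (again, my own function name; the L^2 count is the oversimplification above):

```python
import math
import random

def invert(s, x=None, y=None):
    # Reverse the segment between two randomly chosen points x <= y.
    if x is None or y is None:
        x, y = sorted(random.choices(range(len(s) + 1), k=2))
    return s[:x] + s[x:y][::-1] + s[y:]

# The worked example above: inversion points after positions 2 and 6.
print(invert("123456789", 2, 6))  # 126543789

# Under the L^2-equiprobable oversimplification, the entropy is 2 lg L bits.
L = 9
print(2 * math.log2(L))
```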
Next we look at point mutations.
Lunchtime for me now. See you later.
Whoops. Of course I meant:
P(I) · 2 lg L, where P(I) (a constant, or a very slowly changing secular variable) is the weighting by the probability of doing an inversion at all. A more sophisticated model would have, each time, a probability distribution over a varying number of possible inversions, each with a different probability for a given length.
John et al:
I don’t mean to drive you away from commenting here; it’s interesting, and I hope to find some time to contribute at some point. But I think that the comments thread of this entry is an extremely awkward place to try to do this. It might be worth setting up a dedicated blog at Blogger (which is free), and then making each of the longer comments with new progress into full posts, which each have their own comment threads.
This comment thread is already so long that it’s hard to find anything; it’s giving me a headache trying to follow the development of an actual real, original scientific model mixed in the middle of the flamage and other discussions.
Mark: Of course, you’re right again. Thank you for being so patient so long. Thank you for calling my rather elementary fumblings as “the development of an actual real, original scientific model.” It speaks well for your blogmaster abilities that you were able to steer this thread from windy seas and fog into views of land.
I am already wondering how to incorporate more reasonable assumptions, and how to find the appropriate parameters of inversion probabilities, point mutation rates, and the like for some well-known organisms such as E coli or D melanogaster, or even H sapiens.
I shall probably put this on my livejournal blog, restarting with a typo-removed version. If and when I do so, I shall email you, and you may decide (threadwise) how best to point your readers in that direction.
Here you go: Modeling Evolution at Blogspot.com.
Andrew Wade: besides increased encephalization as a result of dietary change, there’s also the theory that the increase may be partially due to cranial heat dissipation.
While this may sound outlandish at first glance, it has some good points for it if you consider humans as long-distance runners and hunters in a relatively arid landscape, and that the head radiates a large amount of heat. I don’t recall who had been doing this work offhand, but I’ll do a search.
Von Post: Hah, for once we *didn’t* have any real wind damage here in Altadena — at least on my block.
I would also like to thank Mark for his patience, considering that Sal’s arguments should be the main topic.
I would like to make two comments here, though. First on brain development, and second on Jonathan’s interesting model since I think Jonathan should have the opportunity to start the thread on Blake’s link.
Norm:
As I pointed out, it seems like an exponential response to fitness pressures is to be expected, making brains no exception. RBH’s measure in darwins points to that too, it being the characteristic time for such a process.
By coincidence, I happened on two blogs that contribute here. The first pointed to a paper comparing genomic changes in humans and chimps to an old world monkey (OWM) baseline. It turns out that brain genes evolve more slowly than other tissue-specific genes. “When tissue-specific genes evolve more slowly than the rest of the genome, the explanation is most likely a stronger selective constraint in that tissue.”
OTOH, both humans and chimps have evolved faster than the OWM. What evolves most, compatible with the constraint above, is genes targeting regulation. (But it seems the paper only looked at expression of genes in grown individuals, so developmental genes aren’t as visible.) ( http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371%2Fjournal.pbio.0050013 )
The second blog mentions a report on gene expression in the mouse brain: “a surprisingly large proportion of the genome is expressed in brain — around 80 percent of all identified genes. Most of these are expressed in only a relatively small subset of genes”
“subcellular function in neurons is identifiable by looking at gene expression in different microanatomical regions. That’s pretty cool. It goes to show that the neurons are the most specialized adaptive part of this whole party. The neuron is a finely tuned machine.”
And hominids have probably gone on to develop more of this. So the brain is a complex and diversified system which develops under severe constraints. Some were noted above (energy requirements, cooling requirements), but it is also a heavy thing to lug around on top of a flexible neck.
So it is likely that brain mass hasn’t developed comparatively as much, nor is brain mass a particularly good measure of what a brain does. (Which is not surprising if one thinks about what a small bird brain is capable of.)
Jonathan:
Thank you for sharing your ideas! (And for your notifying comment on The n-Category Café, clearing up any possible misunderstandings regarding references to Chris’ model.) I’m interested in models such as Chris’ and yours, and would be very interested to see your proposed paper when it appears, or the preliminary versions on your livejournal.
When you continue, on the livejournal or on blogspot, I have a question on your sexual reproduction model. The other two parts, entropy in natural selection and crossover/inversion, were clearly motivated and even looked at in parameter space. But I don’t quite see how sexual selection works as “surprise”.
I had only time for a quick look (but your model deserves more, I know), and in the case of equal fitnesses, regardless of their value I think, we see the maximum surprise (2 bits). Is it because selection is modeled separately that it isn’t surprising to see a lower-fitness specimen reproduce? And then the surprise, higher information, is that the specific pairing comes about when the specimens are equally desirable (fit)?
“Again, I stand by for corrections to this first cut of the first part of my first cut.”
A typo in the first part of the model: “C = ((2/3) * lg(2/3)) + ((1/3) * lg(1/3)) = -0.918295834”.
Should probably be entropy “H =”.
Btw, there was a post once on Panda’s Thumb on a GA model with a limited population, where they switched selection on and off and noted the specimen average of difference (IIRC) by some measure of entropy. IIRC the numbers could be comparable to yours in order of magnitude.
Torbjörn, Jonathan (sorry about calling you John), Norm et al:
No need to apologize or thank me. I really didn’t intend to drive you away from commenting here! I just thought that the discussion was interesting enough to be worth pursuing, and that it deserved better than being tangled into a massive thread where it’s all mixed up with my arguments with Sal, JD, etc.
Mark (I assume):
No sweat! And the thread is becoming unwieldy.
Norm:
Somehow the link to the second blog on mouse genome expression didn’t make it: http://johnhawks.net/weblog/reviews/genomics/brain/brain_atlas_gene_expression_2007.html .
Jonathan:
They are mentioning this thread on PT now, see http://www.pandasthumb.org/archives/2007/01/junk_dna_lingui.html . And they mentioned the GA work; google Schneider (AND Panda’s Thumb for the post I mentioned), for example: http://www.pandasthumb.org/archives/2004/05/shannon_entropy.html with references. “the excellent work by Schneider has shown that in the Shannon sense, information can trivially arise in the genome under processes of variation and selection.”
Also, my last question is probably unreadable. I meant to ask “And in that case, is the surprise (higher information) that a specific pairing comes about (since it is maximized in equally desirable (fit) specimen)?”
Norm,
Don’t put me down on the “pro-spike” side. I’m not really in a position to say how unusual the rate of change for our species is.
Er, mammals would be from the 104th floor on too; they appeared fairly soon after the dinosaurs did. (And I wouldn’t want to argue that they’re really any more complex than the dinosaurs anyway). I’d point out that the metazoans (animals) only show up from the 95th floor onwards. There’s an upward trend in brains sure, but it’s not obviously exponential (or asymptotic).
There I think you’re on to something. The key innovations I would point to would be sex, followed by multicellularity. Our current genetic code probably was a key innovation as well, being well optimized for mutation-tolerance, but there’s no way to tell that from the fossils.
The difference in brain capacity and *especially intelligence* between us and all other species (except for cetaceans) is so great, and the same awareness difference between primates and all other animals (again, except for cetaceans) is also very great, and it’s all taken place in the last few inches of a 108 floor model.
“Increased speed” and “spike” are clearly synonymous in this context.
I won’t post anymore on this subject on this thread, as per Mark CC’s request.
Just in passing, I note that Sal has – as always – cut and run. I’m not sure which is funnier: his scientific illiteracy or his cowardice.
Where did I specifically use the phrase “Shannon channel”?
Or are you putting words in my mouth? When I threw out the phrase “channel capacity” at Ed’s weblog, I said:
I did not use the phrase “Shannon Channel”. That’s your misrepresentation. Let the reader google “Salvador Cordova” and “Shannon Channel”. Any relevant hits??????
Channel capacity has been used to mean bits per unit of time; channel capacity in the pure Shannon sense has not been the only usage of the phrase “channel capacity”.
Oh, but I’m familiar with the Darwinist game. Use one definition of a phrase when it is clear I’m using another, or at least a somewhat more colloquial, usage of the phrase. Keep quibbling over definitions, rather than trying to represent or understand what has been said….
In any case, here is one usage of the phrase: Channel Capacity. Now you can go rail and post about them too….
The concept of channel capacity is in practical terms broader than the limits imposed by Shannon’s theorems. For example, twisted pair has a theoretical capacity as defined by Shannon, but the mechanical devices throughout history have never marketed the actual, or practical, channel capacity at the theoretical maximums. That’s why we had modem communication rates over twisted pair ranging from 300 baud to wherever they are today, even though they are below the theoretical maximums defined by Shannon’s theorem.
If you refuse to accept the sense of my usage of “channel capacity” it only demonstrates that you find yourself resorting to dubious quibbles. You seem to grant a far more charitable reading to self-contradictory theories of Charles Darwin and modern day friends.
I did define it as the interface between the environment and genome. If we are talking about humans, we can define the channel as the interface between the environment and the human genome. The measure is how many nucleotides are fixed into the genome. I pointed out, because the evolutionary mechanism is so slow, it’s a moot point to be trying to apply the Shannon formulas for channel capacity since the evolutionary mechanism being so slow will take precedence.
It’s like having a 1-bit-per-second modem over twisted pair when the requirements are for a 1-billion-bits-per-second bit rate. Shannon-Hartley will be a moot point in such a case since the modem is so slow. In the case of human evolution, we might be dealing with as little as 2 bits every 6000 years. If you willfully fail to appreciate this subtlety, where I am simply trying to demonstrate the mootness of invoking Shannon’s theorem given that Darwinian mechanisms are so slow, fine, let’s move on to the next topic of evolutionary algorithms…..
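For the record, the arithmetic behind that contested “2 bits every 6000 years” figure is nothing deeper than the following (every assumption here is itself disputed in this thread):

```python
# Assumptions behind the figure, all of them contested: 1 nucleotide fixed
# per 300 generations, roughly 20 years per human generation, and 2 bits
# per nucleotide (4 possible bases).
generations_per_fixation = 300
years_per_generation = 20
bits_per_nucleotide = 2

years_per_fixation = generations_per_fixation * years_per_generation
print(bits_per_nucleotide, "bits per", years_per_fixation, "years")
# 2 bits per 6000 years
```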
I point out the problems with the other mechanisms. See: What are the speed limits of naturalistic evolution?
Even these “mechanisms” are mostly speculations outside of rigorous empirical scrutiny. We don’t have the means right now of doing a sufficient amount of sequencing to prove some of the ideas. For example, Kimura’s neutral theory might collapse if Junk DNA is found functional, and the interspecific sequence divergence between deeply conserved regions begins to show increases in Single Nucleotide Polymorphisms (SNPs) over time.
I quote Kimura and Ohta describing the selectionist dilemma:
There you have it Mark, a calculation for the practical channel capacity from the environment to the human genome via natural selection: 1 nucleotide per 300 generations, about 2 bits every six thousand years.
Sal:
Stop playing word games. You’re just trying to avoid the fact that you cannot do the math.
I originally said that you were using Shannon theory where it wasn’t appropriate. You disagreed, and claimed that you were representing information in evolution as communication on a channel, per Shannon theory. Since then, you’ve repeatedly asserted that you can define a channel, per Shannon theory, that describes the limits of how much information can be passed. I’ve repeatedly asked you to present a valid mathematical definition of that channel, which you keep weaseling out of. You’ve complained that I’ve misrepresented what the channel is – but you have yet to present a mathematical definition of the channel. Now you’re complaining that I said “Shannon channel”, even though the whole discussion started because you claimed you could use Shannon information theory to define a channel, and show that the channel was insufficient.
I’m still waiting for the definition of a channel. And you’re still weaseling around, finding excuses for why I’m asking for the wrong thing – but continually refusing to say just what the right thing is.
The Kimura/Ohta quote is the closest you’ve come – but it does not say the same thing that you are asserting – and you quite conveniently don’t show any of the math.
So, I repeat:
Show your information theoretic definition of a channel; and then show how you compute that channel’s capacity.
I very strongly suspect that you can’t even do the first part of that, because I’ve seen you babble about information theory all over the net, but you’ve never shown a glimmer of understanding of how to actually use the theory, or do the math.
I see that Sal has done his usual: put in an appearance, made some non-comments, failed to address any of the questions actually put to him, and then bailed again. Quite entertaining.
However, he has left unanswered a number of comments that constitute such a distorted understanding of science and evolution that they should be called attention to.
Such as, “Some of these evolutionary biologist will say evolutionary biology doesn’t apply to OOL. Well in that case, neither has it solved the problem nor completely discredited the supposedly repackged creationist arguments against OOL.”
We have here a complex conglomeration of misunderstanding and illogic (illustrative, perhaps, of the problems that Sal is having defining ‘channel’ mathematically).
Of course evolutionary biology doesn’t apply to OOL: evolutionary biology doesn’t begin to work until we actually have biological replicators. It’s pretty much equivalent to being bothered because synaptic activity models don’t apply to the formation of synapses, or to thinking it somehow ‘clever’ to point out that demolition techniques aren’t used in constructing buildings.
In short, it’s not merely ignorant, it’s logically incoherent.
And the theory of evolution does not apply to OOL, any more than theories of stellar evolution apply to the Big Bang.
By this statement, Sal reveals that he does not understand how science works, or how theories are properly demarcated.
And the ‘creationist’ arguments against the theory of evolution (oh, dear – Sal, didn’t Dembski mention that ID isn’t creationism? You shouldn’t keep getting them mixed up – otherwise you’ll be the ‘smoking gun’ that demolishes the ID position in the next Dover) are discredited because they have no substance or logic in them.
But back to the math.
How, Sal, can you construct a channel definition (mathematically, Sal) between the environment and a breeding population? After all, your choice of the ‘bandwidth’ of the genome is irrelevant – it’s the population that evolves.
Or do you really not understand what the theory of evolution actually says? Is that why you’re having so much trouble with this?
No I did not say that, Mark.
I specifically said Shannon definition of information is reduction of uncertainty. I said:
There is reduction of uncertainty across a channel. The channel itself is NOT the information in question anymore than the wires across which bits of information are transmitted is the information in question. The channel does not equal the information in question, it is only the channel for information. Sheesh.
The concept that “aspects of evolution are modellable as a channel for information” is not equivalent to saying “evolution as communication on a channel”.
Let the readers do a “find” for that phrase and see if I ever said “information in evolution as communication on a channel”.
You continue to attribute and project ideas onto me that I do not hold and never intended to say. If you were unsure about what I meant, you could ask for a clarification rather than projecting your erroneous ideas of what I actually said. You might find you misunderstood what I stated. However, I think if you understood what I said, you would be robbed of the strawman argument that you keep trying to knock down. If you wish to keep arguing against things I didn’t say or intend to say, fine….
I used Shannon to define information as the reduction of uncertainty. A channel is defined as the interface through which reduction of uncertainty happens. The bit rates are constrained by at least two considerations:
1. The limits based on Shannon’s theorem
2. FURTHER constrained in practical terms by the modulating/demodulating devices involved
Even if #1 gives as much capacity as one would ever want based on the limits defined by Shannon, #2 will be the overriding consideration; thus, in the case of evolution, it is calculations and considerations like those given by Kimura at the nucleotide level, which amounts to 2 bits per 6000 years in the case of the human genome.
If #1 allows something faster than #2, it’s a moot point because the channel is still constrained by #2. If #1 allows something slower than #2, then evolution via natural selection is even slower than 2 bits per 6000 years with respect to fixations in the human genome, and that sinks Darwinian theory as well.
Thus, in light of the slow bit rates of #2, #1 becomes a moot point. Do you understand what moot points are?
Sal:
More evasion, huh?
Why don’t you just admit that you were talking out your ass? You’ve modified your claim a half-dozen times in order to avoid admitting the fact that while you invoke information theory as a support for your arguments, in fact, you haven’t a clue about how to actually do the math to show it.
I’m going to keep hammering on the same points, because they’re crucial. You still claim that something about evolution can be modelled as a communication channel; and that the actual process of evolution as observed requires more information than can be passed down that channel. That’s a mathematical claim – and so you have to present it mathematically. As long as it’s just words, you can play these silly semantic games – “I said channel not Shannon channel”, and such. You can always shift the goalposts until you actually present a mathematical definition of the information channel, and a computation of its actual capacity.
So show us a mathematical definition of the channel, and how you’re computing its capacity. You can pick whatever formalism you want, you can pick what the elements of the channel and information are, you can pick how to present it, you can include whatever limiting factors you want, as long as you present them as a part of a valid mathematical argument.
But until you do that, you’re just bullshitting. Wave your hands in the air all you want, but what it comes down to in the end is that you claim to have a mathematical result concerning information theory and evolution, but you refuse to actually show the math for that solution – you expect everyone to just take your word for it that you did the math and didn’t make any mistakes.
One additional question: Do you even understand what I mean when I ask you for a mathematical definition? I’m beginning to think that even that is over your head, and you don’t even understand what I’m asking for.
Sal’s utterly convinced of his “conclusion,” but not because he did any sort of math to get there. As long as he doesn’t do or show any math, he can keep changing his mind about how he arrived at the “conclusion” and claim it’s us doing the “misrepresentation.” It’s kind of hard to accurately attack someone who never makes an argument while effectively forcing us to infer from his vacuum.
I think we need to define a new fallacy. Something like “moving the starting line.”
Such a mathematical definition is beyond me, but can anyone well steeped in Darwinian theory comment specifically on Sal’s assertion that evolution via natural selection being slower than 2 bits per 6000 years (with respect to fixations in the human genome) would sink Darwinian theory?
Are you saying, Mark CC, that the estimate of 2 bits per 6000 years is not a good estimate of the speed of evolution via natural selection? And is this estimate generally accepted as a good one by biologists or not?
Norm:
I’m not exactly up on the latest in bio-informatics, so I’m not sure what the accepted rate of change is. But I can point out one crucial flaw in Sal’s claim that it’s 2 bits in 6000 years.
Sal is using a model that completely serializes change. What that means, informally, is that Sal is making the assumption that at any point in time, there is exactly one change propagating through a population. His model is: make a single change; propagate that change until it’s fixed in the population; then make the next change. But in real evolution, changes are constantly occurring – there is no pause while an older change is becoming fixed into a population – changes keep happening and propagating all the time. So even if Sal’s number was right for what he’s arguing (and I suspect that it is not, since he seems to be pulling it out of thin air, without doing the math), it wouldn’t be an accurate bound on the rate of change – it would predict how long a single change could take to fix into a population, but it could not accurately predict the rate of changes.
So even if that 6000 year figure is correct, you could have one change in year one, another in year two, another in year three, …, with change one being fixed in the year 6000, change two in 6001, change three in 6002, and so on. The rate of transmission is dependent on the rate of change; if there’s one change a year that propagates enough to fix, then you’ll get one answer; if there’s 300 changes a year that propagate enough to fix, you’ll get a different answer.
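That pipelining argument can be made concrete with a toy count (illustrative numbers only, not a biological estimate):

```python
def fixations(starts_per_year, fixation_years, total_years):
    # Changes propagate concurrently: the cohort of changes that started
    # `fixation_years` ago fixes now, so the long-run fixation rate tracks
    # the rate at which changes START, not the per-change fixation time.
    fixed = 0
    for year in range(total_years):
        if year >= fixation_years:
            fixed += starts_per_year
    return fixed

# One change starting per year, each taking 6000 years to fix: over
# 12000 years, 6000 changes still complete (years 6000 through 11999).
print(fixations(1, 6000, 12000))  # 6000
```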
Sal:
Your link doesn’t work.
So:
1. You can’t verify your prediction, you don’t have any evidence that the limit for fixation is 1 nucleotide per 300 generations, and you don’t have any evidence that this limits all mechanisms of evolution. In fact, there are plenty of verified mechanisms that directly invalidate the assumptions made in that calculation.
2. You can’t show that the other mechanisms are wrong. Not surprising, since many of them are verified. For example, I’m not a biologist, but I can read the fine script. “As of the early 2000s, the neutral theory is widely used as a “null model” for so-called null hypothesis testing.” ( http://en.wikipedia.org/wiki/Neutral_theory_of_molecular_evolution )
That is, the neutral theory is verified and accepted, and keeps getting verified in each test of lineage divergence.
And specifically, making this claim of yours false:
Junk DNA is DNA that will show no function, predicted from similar organisms having widely different amounts of seemingly non-functional DNA. Which DNA is junk is difficult to establish because it is currently a negative test, eliminating all other functions. So, some of the current junk DNA will surely be found functional, most of it is predicted not to be, and neutral theory works in spite of your assumptions.
No, it is not. IIRC this is a figure you have calculated over on PT, specifically between chimp and human, from differences in number of nucleotides. It is not a number for typical evolution, and it is not even a number telling us the difference in genes between species. To make it even more wrong, you have forgotten to include lineage divergence, ie counting from the last common ancestor.
As noted before, neutral theory is complementing other evolutionary mechanisms, and apparently accepted as the default mechanism in lineage divergence.
So this remains:
Regards the scientific society, the dilemma was solved decades ago. If you insist that this is a real limit, verify your prediction and show that all other mechanisms are wrong.
Norm:
Sal is babbling about a number which the biologist Haldane got from a model in population theory. It is called Haldane’s dilemma, since it supposedly sets a maximum evolution rate lower than that observed for, e.g., humans since the split from chimps. An electrical engineer named ReMine is promoting this dilemma as a real one, to the IDiots’ delight.
Needless to say, it isn’t a proper application of the model, which Haldane already suspected. Real effects in populations (finite population size, bottlenecks, neutral drift, et cetera) obviate the assumptions of the model. (And the needed 720 genes/generation fixation rate Sal bandies about is bogus; he is making a faulty calculation. No surprise there.)
Now, if you google Haldane’s dilemma, it may be that Wikipedia has a good discussion, but the Talk Origins Creationist Claim answer is not up to date or entirely correct. A much better discussion (and the debunking of Sal’s 720 point mutations/generation) is found here: http://www.pandasthumb.org/archives/2007/01/dissent_out_of.html .
Note especially comment #154819:
“Just pointing out that as far back as 30 years ago, in
Solutions to the Cost-of-Selection Dilemma
Verne Grant and Robert H. Flake
Proc Natl Acad Sci U S A. 1974 October; 71(10): 3863-3865.
http://www.pubmedcentral.nih.gov/articlerender.f…
..the basic assumptions/limitations of Haldane’s original model were laid out and the various “solutions”, some excellently discussed in this thread, were summarized: soft selection/intraspecific competition, population structure, non-independent fitness effects (Mayr , Mettler and Gregg)/truncation selection (King, Maynard Smith, Crow, Felsenstein), gene linkage, and drift(Kimura) and introgression.”
So this is a non-claim regards evolution since at least 30 years, laid to rest in a peer-reviewed paper.
Mark:
A heads up. Sal comments on the above thread:
“Indeed check it out. Chu-Carroll is pathologically incapable of accurately representing my ideas and claims. He is forced to resort to strawman arguments and disingenous reprsentations of what I actually said. I called the readers to google and find statements Chu-Carroll attributed to me. Any success?
By all means check out the exchange:
Chu-Carroll vs. Cordova
I don’t intend to allow him to keep attributing statements to me which I never made or implied. Such a line of debate is disingenuous at best.” ( Comment #157573 )
Apparently not what he would say to your face. 🙂
Ehrm! Pointing to the link would have been enough. Never post when in a hurry. 🙁
Norm:
I happened on a basic discussion of Haldane’s dilemma by a biologist. He discusses it since it has been used much recently on the creationist blog Uncommon Descent. (Imagine that! 😮 )
“A number of anti-evolutionists have taken this as evidence against evolution. If, they argue, genetic changes can only be fixed at a rate of 1 per 300 generations, how can evolution possibly explain the differences between species like humans and chimps, where not nearly enough generations have passed to account for the number of differences that we observe.
There are a number of problems with using Haldane’s calculations in this way, and in this post I’m going to look at one of those – the one that I think is the most important. For clarity, I should probably make sure that I am very explicit about what, exactly, the problem is before I start, so here it is:
Using Haldane’s 1 substitution per 300 generations as a speed limit for all evolution is wrong because Haldane’s calculations and concerns only apply under certain very specific circumstances.”
And he proceeds with a simple model that (with simple math, I promise!) shows how fixation works under normal, non-specific, circumstances.
“[ ] Haldane was looking at the maximum practical rate of evolution in cases where the environment had changed, and only the “mutants” were able to survive (and/or reproduce) at the old rate. [ ] This very specific situation is sometimes referred to as “hard selection.”
The situation I outlined does not take place in a changed environment, and does not result in any changes in population size. This is sometimes referred to as “soft selection,” and in situations like this the rate of change can be much faster because there is no need to worry about the effects of a shrinking population.” [Bold added.]
He also mentions the point Mark does:
“The anti-evolution objections to the speed of evolution assume that only one mutation can be moving toward fixation at a time. This is incorrect, but this post has already run long enough, so I’ll save that point for another post in a couple of days.”
( http://scienceblogs.com/authority/2007/01/how_fast_can_evolution_work.php )
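His soft-selection scenario is easy to play with in a toy Wright-Fisher simulation. This is entirely my own sketch, not the blogger’s model: constant population size, one beneficial allele with relative fitness 1+s:

```python
import random

def generations_to_fixation(pop_size, s, seed=1):
    # Constant population size ("soft selection"): a beneficial allele
    # spreads by biased sampling, with no Haldane-style shrinking-population
    # cost. Returns generations to fixation, or None if drift loses it.
    rng = random.Random(seed)
    freq = 1 / pop_size  # a single initial mutant
    gens = 0
    while 0 < freq < 1:
        # expected frequency after selection, then binomial resampling
        p = freq * (1 + s) / (freq * (1 + s) + (1 - freq))
        mutants = sum(1 for _ in range(pop_size) if rng.random() < p)
        freq = mutants / pop_size
        gens += 1
    return gens if freq == 1 else None
```

Most runs with a weak advantage lose the allele to drift, which is why fixation probabilities, not just fixation times, matter in these arguments.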
Thanks, everyone. I take it no one can say in what range a maximum speed of evolution might be? I’d bet there are too many variables to even answer that question with anything beyond a highly vague, educated guess, right?
Is it even *conceivable* in any of your opinions that evolution (as far as we can estimate) *might* be too slow to account for (for instance) modern humans? Forget any fear that you may be providing grist for the IDist’s mill; I’ve got no agenda beyond rational exploration. Is it conceivable at all, based on what little we know?
After spending some time reading various posts and their comment threads on Panda’s Thumb, I now consider that my immediate above questions constitute a rushed post, far too simplistic and general in content for anyone to bother answering.
Whether the theory of evolution can account for ALL the diversity of present life within the known parameters of time and probability seems unprovable. However, at least the ToE is indeed composed of workable, potentially-and-sometimes-actually testable hypotheses, unlike the IDists’ undefined and untestable axiomatic “primitive unknown” they call “intelligence”.
Thanks again for all your indulgences.
Actually, Norm, Kimura calculated the maximum number of evolutionary changes that could be simultaneously moving towards fixation (more than that and they interfere). It turns out that the maximum number of loci of change (not restricted to single nucleotide substitutions – a locus could be an insertion, deletion, inversion, duplication, etc.) is about 5% the size of the effective population.
BTW, Haldane’s rate was 1 gene substitution per 300 generations, not 1 nucleotide substitution per 300 generations. A gene substitution includes duplication and inversion (see loci above), so the channel capacity using Haldane can’t be simply 2 bits per 300 generations if you are using nucleotides as your definition for bit. A gene duplication may involve several hundred nucleotides. Suddenly, that number Sal calculated looks rather ludicrous (if it didn’t already). You see, Sal, Haldane didn’t care how much change a gene went through in order to reach its substitution state, so you can’t use his rate the way you are trying to use it. Sal strikes out yet again.
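A quick back-of-the-envelope sketch in Python illustrates the point above. The 500-nucleotide duplication length is purely a hypothetical figure for illustration (“several hundred nucleotides”); the 2-bits-per-nucleotide accounting is the one used throughout this thread:

```python
# Haldane's limit: 1 gene substitution per 300 generations.
GENERATIONS_PER_SUBSTITUTION = 300
BITS_PER_NUCLEOTIDE = 2  # four possible bases -> log2(4) = 2 bits

# If a "substitution" is (wrongly) read as a single-nucleotide change:
bits_per_gen_single_nt = BITS_PER_NUCLEOTIDE / GENERATIONS_PER_SUBSTITUTION

# But one gene substitution can be, e.g., a whole-gene duplication.
# Hypothetical gene length, for illustration only:
DUPLICATED_NUCLEOTIDES = 500
bits_per_gen_duplication = (DUPLICATED_NUCLEOTIDES * BITS_PER_NUCLEOTIDE
                            / GENERATIONS_PER_SUBSTITUTION)

print(bits_per_gen_single_nt)    # ~0.0067 bits/generation
print(bits_per_gen_duplication)  # ~3.33 bits/generation — 500x higher
```

The per-nucleotide reading understates Haldane’s limit by whatever factor the length of the substituted locus happens to be, which is why the “2 bits per 300 generations” figure doesn’t follow from Haldane.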
Norm:
On Haldane’s dilemma, a commenter on a PT thread noted that, as far as he remembered from reading Haldane’s original paper, Haldane was actually trying to account for the slow speed of evolution seen in some cases.
That could still square with the Talk Origins quote about Haldane being concerned with his model’s applicability. In any case, it points to the need to read the original research before discussing it. Haldane was doing this in 1958, I believe; the structure of DNA was discovered in 1953, and many evolutionary mechanisms (such as neutral drift) hadn’t yet been discovered.
And it would be very ironic if it is true.
This is like asking if general relativity can account for the fall of all masses.
And it touches a pet peeve of mine: how theories work. While induction, taking every lawful observation as supporting a law, is useful to suggest and support theories, it isn’t how we verify them. We make predictions, we test for falsification, and we accept a successful theory.
Now, we can’t accept a theory as ‘maybe’ working but must accept it as the default theory based on the current evidence; it is what the evidence tells us.
It becomes the task for new data and/or other theories to disprove the currently accepted. So the proper formulation of the above would be:
“Whether another theory can account for SOME of the diversity of present life within the known parameters of time and probability is unproved.”
(And by now, unlikely. As in physics, there is simply no longer room for new theories in some areas. Deeper theories and new mechanisms added, yes, but still explaining the same old mechanisms and current data.)
Corollary:
“Since ID consists entirely of negative arguments against the well tested theory of evolution instead of making positive and new predictions, it is guaranteed to not be that theory.”
“Now, we can’t accept a theory as ‘maybe’ working but must accept it as the default theory based on the current evidence; it is what the evidence tells us.”
I thought I expressed that? =)
Thanks for the other corrections.
W. Kevin Vicklund:
Has anyone determined whether the maximum number of evolutionary changes that could be simultaneously moving towards fixation can in fact account for the fossil record’s evidence of actual evolutionary change? For instance, can the accepted top rate for such change account for the differences between protohumans and modern man, or for the evolutionary development of sex (to cite two examples)?
Norm, I’ve seen numerous citations that the maximum observed rate of evolution (in controlled laboratory experiments) is about 5 times faster than the fastest known historical rates (and thus can easily accommodate the fossil record). How this actually compares to your particular question, I’m not sure. But at a (relatively small) population size of 10 million, the maximum number of simultaneous changes moving towards fixation in a sexual species is around 500,000 according to Kimura’s equation. I would be very surprised indeed if the fossil record required that many simultaneous changes, but I am not the person to ask.
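The 500,000 figure follows directly from the roughly 5%-of-effective-population ceiling mentioned earlier in the thread; a trivial sketch, taking that rule of thumb at face value:

```python
def max_simultaneous_fixations(effective_population: int) -> int:
    """Approximate ceiling on loci simultaneously moving toward fixation,
    per the ~5%-of-effective-population rule of thumb cited above."""
    return int(0.05 * effective_population)

# A sexual species with an effective population of 10 million:
print(max_simultaneous_fixations(10_000_000))  # 500000
```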
Then it appears Sal has no case.
Of course, for the majority of free thinkers (those that aren’t particularly well versed in all the science comprising evolutionary theory), sad to say, the issue will remain one of faith, i.e., which experimental results, probability determinations, scientists, etc., one tends to trust.
Having long ago worked through my own philosophical/psychological/spiritual outlook, I know I don’t have much confidence in the evidence and interpretations of such coming from the IDists.
Don’t misconstrue my corrections of your misunderstandings and misrepresentations of what I said as evasions.
I have repeatedly contrasted what I said to your misrepresentations of what I said, and you’re not faring so well.
Glad to see you’ve stopped attributing your straw man characterization to me and are now starting to represent my position accurately. Recall, your former straw man of my position was:
Which is what I didn’t say. Don’t like being confronted with the fact you attributed something to me which I didn’t say, eh, Mark? Nice to see you not repeating your misrepresentation this last time around.
Anyway, what you just said is more accurate:
That is more accurate. After 300 posts, you finally are trying to represent my claims accurately.
No goal post moved, Mark, I didn’t use the phrase “Shannon Channel” that’s you attributing things to me which I did not say. Is that the way you Darwinists like to debate? Fabricate things your opponents didn’t say, and then pretend you defeated those arguments?
Anyway, here are some details.
A population has common DNA. We can label such as that DNA which is fixed.
Each DNA position in a genome has an information carrying capacity of approximately 2 bits, since there are 4 possible configurations. That is not too difficult to understand, is it, Mark? What part of that do you not understand?
If you can’t understand the basics without distorting and misrepresenting the ideas I’m putting on the table, this discussion isn’t going anywhere. Of course, that would be your desired goal, given now that I might actually have a point you want to evade.
Now, Mark, do you understand that within the context of a genome, we can model a single DNA nucleotide as carrying 2 bits of information?
So, Mark, what part of that do you wish to misunderstand? Once you comprehend that, we can go on to the next step toward defining channel capacity.
Since a fixed nucleotide in a population corresponds to 2 bits, the question is how quickly a population’s collective genome can acquire 180,000,000 nucleotides. By the way, 180,000,000 nucleotides correspond to 180,000,000 × 2 = 360,000,000 bits. I mean, gee, 180,000,000 nucleotides with each position having 4 possibilities leads via discrete math to 4^180,000,000 possibilities, which implies each sequence of 180,000,000 nucleotides can carry
log2( 4^180,000,000) = 360,000,000 bits
Is the math over your head, or you just pretending you don’t understand what I’m saying?
Oh gee, Mark, I’ve been saying 1 nucleotide corresponds to 2 bits for the last month. Seems that hasn’t sunk in. Well, it seems, the authors of this IEEE paper came to a similar conclusion:
Client Side Decompression Technique Provides Faster DNA Sequence Data Delivery
“DNA sequences hold only four letters (A, T, G and C) and it is efficient to use only 2 bits per symbol”
Well, gee, Mark, a little effort on your part and you can see that each nucleotide in a population can correspond to 2 bits.
So if a population must FIX 180,000,000 nucleotides in 5 million years, that corresponds to 360,000,000 bits having to cross a communication channel in 5 million years. So via these considerations, I’ve defined the necessary channel capacity for evolution to succeed for something like humans. It must on average infuse 72 bits per year, or roughly 1440 bits per generation.
Do you not see the numbers? What part of the problem do you not understand? That is the approximate amount of channel capacity needed. The question is then whether evolution via natural selection can deliver that channel capacity. I will address that if you can demonstrate you even understood what I’m trying to convey…
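For what it’s worth, the arithmetic in the comment above does check out as stated, whatever one thinks of the model behind it. This is only a sketch verifying the numbers; the 20-year generation time is an assumption (it is what the jump from 72 bits/year to 1440 bits/generation implies, but it isn’t stated explicitly):

```python
import math

NUCLEOTIDES_TO_FIX = 180_000_000
YEARS = 5_000_000
GENERATION_YEARS = 20  # assumed; implied by the 1440 bits/generation figure

bits_per_nt = math.log2(4)                    # 2 bits: four possible bases
total_bits = NUCLEOTIDES_TO_FIX * bits_per_nt # 360,000,000 bits
bits_per_year = total_bits / YEARS            # 72 bits/year
bits_per_generation = bits_per_year * GENERATION_YEARS  # 1440 bits/generation

print(total_bits, bits_per_year, bits_per_generation)
```

The replies that follow dispute the premises (what the “channel” is, and whether nucleotide counts are the right unit), not this arithmetic.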
Sal:
What I see is three things.
(1) You still have not provided a mathematical definition of your channel. I’m pretty certain by now that you have absolutely no clue of what a channel definition should look like. What you’ve done above is just reiterate your argument – arguing about the capacity of the channel – but without any definition of the channel. You still keep hedging your definitions. The argument you presented above seems to be arguing for the idea of the channel being the genome itself, transmitted from generation to generation – but you’ve insisted in the past that that is not what it is. So I repeat: define the channel.
(2) You’re still evading questions by pretending that I’ve somehow misrepresented you. You are the one who insisted that Shannon theory was appropriate for measuring information in your argument – and since then, you’ve quibbled, bullshitted, and thrown all sorts of insulting accusations about my repeated attempts to get you to define your terms in the context of the theory that you said was appropriate. Whether you use the term “Shannon channel”, “channel in the context of Shannon information theory”, or just plain “channel”, the fact remains that you made the claim that Shannon theory was the appropriate form of information theory for your argument, and that there were limits on the evolutionary “channel”. All of this is just obfuscation, to avoid the fact that you can’t define the channel.
(3) You insist on repeating the same errors. It’s been pointed out to you many times that evolution is not sequential – there can be more than one change propagating through a population at the same time – and you continually babble as though the moment one change starts to propagate through a population all other changes stop.
Sal:
Speaking of saying things for a month, we have pointed out to you that the nucleotide difference isn’t relevant for fixation. Only genes are fixed in a species, and most of DNA is junk.
So you must figure out how many genes need to be fixed per generation.
Oh, and while I haven’t checked your figure for nucleotides, I can see that you continue to discuss the whole difference between humans and chimps here. But you place that difference on one species, while in fact both species evolve from the last common ancestor.
If you can’t define your channel, you could at least try to define your species and your fixation. You have two species, Sal. Humans and chimps. So you have *two species (or channels, if you can define them)* that independently fix differences in *genes*.
Again I thank Sal for asking us a good question, which has led to an 80+ page draft paper up on another wiki.
I think by “nucleotide” he means “nucleotide pair” or “one third of a DNA codon.”
I politely suggest that he read the (oversimplifed) definition of different kinds of mutation at:
http://scienceblogs.com/evolgen/2007/02/mutation.php#more
and notice that “Genomic rearrangements include events such as fusion and fissions of chromosomes, inversions, and translocations (see figure below). The scale of these events can range from a region as small as a gene (a small part of a chromosome) to large portions of chromosomes to entire chromosomes. Fusion events, such as the one that occurred in the human genome after the divergence with chimpanzees, join together two complete chromosomes, whereas translocations occur when part of a chromosome is moved to another part of the same chromosome or to a different chromosome.”
and
“Genomic information can also be duplicated within a genome. This can occur via various mechanisms, sometimes even aided by viral like sequences moving within a single genome. Entire blocks of genetic material can be duplicated via mechanisms that we’re still working to understand. Another common mechanism occurs when DNA sequences are transcribed to RNA, then reverse transcribed back into DNA and inserted back into the genome. Duplications allow genes or other DNA sequences to explore mutational space that would be inaccessible if they only existed in a single copy. That’s because many point mutations are deleterious, but if there is a copy of a sequence that maintains the original function, a duplicate copy can accumulate mutations that interfere with the original function. Many of these mutations will lead to a non-functional duplicate copy, but some may lead to a sequence with a new function that would not be possible with a single copy because the single copy must maintain the original function.”
A chromosomal duplication can DOUBLE the information in an organism’s DNA in a single step.
This is fundamentally different from the series of point mutations that seems to be the main thrust of Sal’s concern in this regard. Many orders of magnitude faster. Macroevolution, not just microevolution.
So these rearrangements and duplications are crucial to proper analysis of the channel capacity (by whatever model of channel, and what is the alternative to Shannon?) which is why I’ve taken such care in such a lengthy paper to define every term, both mathematically, in terms of information, in terms of channel, and in terms of the population genetics model.
I still don’t know the answer. I have to admit, with my open mind, that sal might be RIGHT. I am not married to a hypothesis. I will follow the methodology that I know for seeking the truth. Let the chips (or genes) fall where they may.
Fascinating. I feel I understand the essence of the issue on the table in this thread even without grasping all the math, and yet Sal continues to obfuscate with bogus math.
I smell a (not so) hidden agenda.
BALONEY! Even after 330 posts you still can’t get it right. The physical channel is the evolutionary mechanism. The population genome is the receiver (or storage location), the environment the sender, the evolutionary mechanism the channel. Sheesh.
In a sense the genome can be modelled as a channel as well, but that is not how I’m using it in the context of this discussion.
The channel is the evolutionary mechanism, in whatever way Darwinists mathematically model the evolutionary mechanism via population mechanisms is a mathematical model of the channel (one merely needs to see that 1 fixated nucleotide corresponds to 2 bits). I’ve suggested a few models worth looking at and provided links:
1. Haldane for mammalian evolution
2. Kimura for mammalian evolution
3. Nachman’s model which yielded a paradox
4. Various “solutions” to Haldane’s dilemma in the form of alternative evolutionary mechanisms
I don’t even present them as my models, but your side’s models. I merely critiqued the problems with some of them if you bothered to look at the ongoing discussion over evolutionary speed limits.
Now, Mark, are you going to keep saying I didn’t provide a model, even though I’ve just pointed out I’m merely re-using some of your side’s models with a little modification so we can describe them as a channel model (with 1 nucleotide serving as the bearer of 2 bits of information)?
If you want details and bit rates and issues with each model we can go into that, but you have to stop misinterpreting what I said. You may not like the models. Fine. But quit insisting I haven’t provided you with one. I’ve done so several times. But of course, when after 300 posts you still assert “the idea of the channel being the genome itself” when that is not what I meant, it’s understandable you think I haven’t given you the model. The channel in question is the evolutionary mechanism. I pointed that out several times already, yet you still misrepresent my ideas with, “the idea of the channel being the genome itself”. The genome may be modeled as a channel, but that is not the channel in question.
When I invoked Shannon I specifically showed there was a simple interpretation of 1 nucleotide position having sufficient information capacity such that when uncertainty of what is at that position is eliminated, we have 2 bits of information. There was no need to invoke AIT.
I even gave you an IEEE paper where such an approximation was shown to be reasonable.
With all due respect to Sal, we should have been at this point some time ago.
I have been respectful here, both in my words (no name-calling and explicitly thanking him for a good question), and in my deeds (taking his question seriously enough to have drafted a paper over 80 pages long so far).
It was much earlier discussed that there were, broadly speaking, two ways to start defining the channel for evolution by natural selection. (1) population to population; (2) environment to population.
I did some of the math for (1). Now Sal says that he meant (2). That’s okay, if a little late in the game.
But the problem that (2) leads to very quickly is defining mathematically what one means by the “environment.” This is usually done in terms of a “fitness landscape.” This is nicely defined elsewhere on ScienceBlogs. But, you see, this raises the question: how do we measure the entropy of a fitness landscape? How do we measure the effect of the fitness landscape on a generation of reproduction and mutation and selection?
There are other questions: is the fitness landscape seen as a continuous multidimensional function, on which a finite population is taking a finite sampling; or is the fitness landscape inherently discrete? If infinite, can it be fractal?
How rough is a fitness landscape? Roughness is what the father of fractals says fractals are really about, in the physical world. How rugged is a fitness landscape, using a term of art popular for decades?
I think that Sal has picked the harder approach, which partly excuses him for not having presented even the first step of a mathematical definition. This choice is the zeroth step. I’m already committed to my paper, which does mention this choice of options.
The set of people who really understand higher-dimensional geometry may be smaller than the set who understand what “evolution” actually means.
There is some important current work in the so-called “genotope” (NOT a typo, I mean “genotope” as in “polytope”, not “genotype” as opposed to “phenotype”).
There are publications at a deep theoretical level, and actual (computer graphics) pictures of projections of the human genotope. But I’m afraid that the intersection of the aforementioned sets, i.e. people who really know evolutionary biology AND also know multidimensional geometry, is larger now than when I did my research (1973-1977), which answered questions only now being asked, but is still pretty small. Also, perhaps too specialized for this blog.
So, bottom line: (a) Thank you for clarifying at the lowest resolution level what you mean by channel, Sal; (b) this is the harder way to mathematize, I think; (c) the recent exciting specialized literature is known to a very small subset of biologists, and a very small subset of mathematicians.
I have a hunch that (1) and (2) have to ultimately yield the same answer. That is, that the two approaches to channel capacity for natural selection are, in a deep way, “dual.”
So where do we go from here?
Oh, in that case, since the largest known single mutation that can be fixed is a doubling of the genome, the channel capacity of the evolutionary mechanism must be at least the number of bits in the genome times the “ploidy number” (probably not the technical term, sorry). The largest known genome is 6.7×10^11 base pairs, so the channel capacity is therefore at least 2.6×10^12 bits, assuming diploidy. Now, that of course assumes that the largest genome can be duplicated and subsequently fix. Let’s instead look at the potato genome, which is tetraploid (which means it experienced a single doubling episode). The size of the genome is 1 billion base pairs – note that the size of the genome is the size of one set of chromosomes. That means that the original diploid ancestor had 2 billion base pairs total, or 4 billion bits. Therefore, we know for a fact that the channel capacity must be at least 4 billion bits. Other known species observed to have fixed tetraploidy have even larger genome sizes pushing the lower limit even higher. (At this point, don’t forget we need to factor in number of generations to fix, but I’ve gotta get dinner)
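The tetraploidy arithmetic above, sketched out for anyone who wants to check it (genome sizes as stated in the comment; 2 bits per base pair, as used throughout the thread):

```python
BITS_PER_BP = 2  # four possible bases per position

# Potato: tetraploid, genome (one chromosome set) ~1 billion bp.
# Its diploid ancestor carried two sets = 2 billion bp total, and a
# whole-genome doubling fixed all of that in a single event.
potato_set_bp = 1_000_000_000
ancestor_total_bp = 2 * potato_set_bp
lower_bound_bits = ancestor_total_bp * BITS_PER_BP  # 4 billion bits

# Largest known genome cited above: 6.7e11 bp. If a diploid copy of
# it could double and fix, one event would fix ~2.7e12 bits.
largest_bp = 6.7e11
largest_bound_bits = 2 * largest_bp * BITS_PER_BP

print(lower_bound_bits)    # 4000000000
print(largest_bound_bits)  # 2.68e+12
```

Either figure dwarfs any per-generation “capacity” derived from counting single-nucleotide substitutions, which is the point being made.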
This assumes that Sal has even managed to suggest a correct definition of channel capacity as it relates to evolution. He hasn’t even come close.
Sal:
So now you’re trying to pass the buck?
You are still the person who insisted that Shannon theory was the appropriate theory for your argument. And you have still not given a mathematical definition of what that channel is. You shout, you whine, you insult me, but you still refuse to do what should be a simple thing if you’ve actually ever done the math to figure out if there’s any truth behind your argument.
Of course, the fact that you still refuse to do that, despite all of the attempts to get it out of you, argues rather strongly for the idea that you’ve just been talking out your ass.
Haldane, Kimura, Nachman, or “any of the various solutions to Haldane’s dilemma” are not an acceptable answer. First, to my knowledge, none of them actually analyzed the concept of evolution as a communication channel per Shannon theory; and second, they are multiple different mathematical models. So just throwing references around to different people who’ve done some work on various kinds of mathematical models of evolution isn’t answering the question.
You argue that Shannon theory is applicable to evolution; you claim that in terms of Shannon theory, evolution as a communication channel lacks the necessary bandwidth. To make that claim without it being bullshit, you must have done a mathematical definition of the channel that you’re talking about. So why are you so resistant to showing that definition?
Unless, of course, you’ve been talking out your ass the whole time because you never actually did a serious analysis of evolution as a communication channel, and have no clue of how to create a definition of the channel which will support your argument.
W. Kevin Vicklund wrote above, “I’ve seen numerous citations that the maximum observed rate of evolution (in controlled laboratory experiments) is about 5 times faster than the fastest known historical rates (and thus can easily accommodate the fossil record). How this actually compares to your particular question, I’m not sure. But at a (relatively small) population size of 10 million, the maximum number of simultaneous changes moving towards fixation in a sexual species is around 500,000 according to Kimura’s equation. I would be very surprised indeed if the fossil record required that many simultaneous changes, but I am not the person to ask.”
Sal, is Kevin correct about this?
Lagrangian mechanics is Cordova’s latest target.
Lagrange? Surely not as controversial a man as Darwin? Speaking of which:
============
UK Gov boots intelligent design back into ‘religious’ margins
Not science, not likely to be science
By Lucy Sherriff
Published Monday 25th June 2007 12:35 GMT
http://www.theregister.co.uk/2007/06/25/id_not_science/
The government has announced that it will publish guidance for schools on how creationism and intelligent design relate to science teaching, and has reiterated that it sees no place for either on the science curriculum.
It has also defined “Intelligent Design”, the idea that life is too complex to have arisen without the guiding hand of a greater intelligence, as a religion, along with “creationism”.
Responding to a petition on the Number 10 ePetitions site, the government said: “The Government is aware that a number of concerns have been raised in the media and elsewhere as to whether creationism and intelligent design have a place in science lessons. The Government is clear that creationism and intelligent design are not part of the science National Curriculum programmes of study and should not be taught as science. ”
It added that it would expect teachers to be able to answer pupil’s questions about “creationism, intelligent design, and other religious beliefs” within a scientific framework.
The petition was posted by James Rocks of the Science, Just Science campaign, a group that formed to counter a nascent anti-evolution lobby in the UK.
He wrote: “Creationism & Intelligent design are…being used disingenuously to portray science & the theory or evolution as being in crisis when they are not… These ideas therefore do not constitute science, cannot be considered scientific education and therefore do not belong in the nation’s science classrooms.”
The petition was signed by 1,505 people.
============
I’m impressed – even amazed – that any government is capable of making such a rational decision. Why then do they totally foul up many much simpler issues?
It’s enough to tempt conspiracy theorizing … in this case, a conspiracy that’s actually intelligent and benign.