Mt. St. Helens erupted in 1980. As far as volcanoes go, it was a rather tame eruption but it was one of the larger ones to happen in this generation. Because of its size and occurrence in our lifetimes, it's been the subject of much scientific inquiry. Dr. Steven Austin, a creationist and PhD geologist, collected rock samples formed in the eruption and had them tested using the potassium/argon dating method. The results on different samples gave ages between 0.35 (+/- 0.05) and 2.8 (+/- 0.6) million years. The known age of the rocks was 10 years old.
The fact that accepted "scientific" dating methods failed to assign the correct age to the rocks should cast doubt on the ages assigned to rocks of unknown age. However, evolutionists cried foul. Mark Isaak, on the website Talk Origins, said:
Briefly, Steve Austin collected a sample from the Mount St. Helens lava dome, known to be ten years old then, and sent it to a geochronology lab which tells people very clearly that the methods they use cannot give accurate dates on samples expected to be less than two million years old. In other words, Austin deliberately arranged for the dating to be invalid and then pretended it was someone else's fault.
I thought Mr. Isaak's response was a little vague. He did not spell out exactly why the lab cannot give accurate dates on recent samples. He did provide a link to a site that explained young samples should not have enough 40Ar present to be detected. The fact of the matter was, though, that Austin's samples did have detectable amounts of argon and thus yielded ages much older than the actual ages of the samples. I wrote to TO and expressed my disagreement. Here's a quote from my letter:
Mark Isaak's response to Harold in September's feedback was grossly misleading. Mr. Isaak stated that evolutionists' dating methods "cannot give accurate dates on samples expected to be less than two million years old." He does not explain that the reason is that there SHOULD NOT BE enough of the daughter element present to be detected. In the link provided in the response, Dr. Henke states, "A few thousand years are not enough time for 40Ar to accumulate in a sample at high enough concentrations to be detected and quantified. Furthermore, many geochronology laboratories do not have the expensive state-of-the-art equipment to accurately measure argon in samples that are only a few million years old." This is a real problem for evolutionists. 1) If a rock of unknown date tests to be 3 million years old, how can we be sure it's not only 50,000 years old? By your own admission, accurate dates cannot be given for samples under 2 million years old. 2) If the world truly was created only 6,000 years ago, you must acknowledge your dating methods would be WORTHLESS in trying to establish that.
In reply, Chris Stassen of TO quickly moved the goal post, saying, “'Not able to give accurate dates' generally means that the range of uncertainty swamps the measured age. It does not mean that any arbitrarily old age will result. For example, an age of 0.5 ± 1 million years is not considered either accurate or terribly useful, even though it is correct.”
I was tempted to point out that the range of uncertainty swamping
the measured age didn't happen in the case in question but I let it
go.
My exchange with Talk Origins happened in October 2006. So why am I bringing this all up now? I guess there are a couple of reasons. First, it's still relevant to the debate because secular scientists still resort to these same arguments whenever their tests fail to accurately date rocks of known ages. But more than that, I recently came across a funny video that uses a perfect analogy to drive home these very points.
Ian Juby hosts a periodic show on YouTube called “Genesis Week.” His humor is a little campy but, overall, I find his videos interesting. The full video (self-titled Rant #100) can be viewed here but I've edited it down to the relevant section below.
Isn't that a hoot? He echoes the very points I've made before but his glass of water analogy really nails it. Secular dating methods don't give “no date” for rocks of known origin – they give erroneous dates which are much older than the actual date. How then can we have any confidence in the dates assigned to rocks of unknown age?
There are at least a dozen assumptions that must be made when radiometric dating is being used to determine a rock's age – none of which are testable. One assumption, for example, is that none of the daughter element is present in the sample at its origin (or at least that the exact parent/daughter ratio can be known). In science, nothing is really ever proven “true” but some things can be proven false. I believe this particular assumption has been proven false. What then of the other assumptions? Why should I believe any are valid?
The fact of the matter is that I don't. I don't see why any reasonable person would. But then again, we are talking about evolutionists.
10 comments:
The decay product measured when doing potassium-argon (K-Ar) dating is argon-40.
Argon makes up approximately 1% of the atmosphere. This is the same isotope (argon-40) that is produced by the decay of potassium-40, the radioactive isotope (in contrast to, e.g., various forms of uranium-lead dating, in which different isotopes of uranium yield different isotopes of lead).
K-Ar dating involves heating a sample, in a vacuum chamber, to release the stored argon, which is then measured.
The vacuum chamber does not contain a perfect vacuum; a tiny amount of residual air remains. One percent of that residual air is argon.
Therefore, when one measures the argon expelled by the heated sample, one also measures some argon that was already in the chamber to begin with. This is a very small amount of argon, to be sure, but when one is dealing with small samples and an isotope whose half life is over a billion years, a small amount can give a significantly false reading (with older samples, the amount of argon in the vacuum chamber at the start is swamped by the amount released by the sample).
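To put rough numbers on this reasoning, here is a minimal Python sketch of the standard K-Ar age equation. The decay constants are the commonly cited published values; the background argon level, expressed as a hypothetical Ar-40/K-40 ratio of 1e-4, is purely illustrative. The point is that a fixed contamination offset dominates a ten-year-old sample but barely nudges a billion-year-old one:

```python
import math

# Commonly cited K-40 decay constants (illustrative precision)
LAMBDA_TOTAL = 5.543e-10   # total decay constant of K-40, per year
BRANCH_TO_AR = 0.1048      # fraction of K-40 decays that yield Ar-40

def k_ar_age(ar40_per_k40):
    """Standard K-Ar age equation: t = (1/l) * ln(1 + (Ar/K) / branch)."""
    return (1.0 / LAMBDA_TOTAL) * math.log(1.0 + ar40_per_k40 / BRANCH_TO_AR)

def expected_ratio(true_age):
    """Ar-40/K-40 ratio a sample of a given true age should show."""
    return BRANCH_TO_AR * (math.exp(LAMBDA_TOTAL * true_age) - 1.0)

# Hypothetical background contamination, as an equivalent Ar/K ratio
background = 1.0e-4

for true_age in (10, 1e6, 1e9):
    measured = expected_ratio(true_age) + background
    print(f"true age {true_age:>13,.0f} yr -> apparent age "
          f"{k_ar_age(measured):>15,.0f} yr")
```

Under these assumed numbers, the ten-year-old sample reads as over a million years old, while the billion-year-old sample's apparent age shifts by well under one percent.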
So to make the analogy accurate, Juby's rant should have been about trying to measure the water added to a glass already half full of water. Obviously, if you add a drop or two, you won't be able to tell whether you added nothing or several drops; if you add a liter or two, the water already in the glass won't make much difference (except that the glass will overflow, so start by emptying the half-full glass into an empty gallon jug).
This exact problem does not carry over into other forms of radiometric dating, though other methods have their own limitations. There are of course methods that are appropriate for measuring the ages of much younger things (radiocarbon dating, for example, is usually useful only up to ages of ca. 30,000 years, and can't tell the difference between a sample 30,000 years old and one thirty million years old, though a few highly sophisticated labs can stretch the usefulness of C-14 dating up to 100,000 years or so).
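The practical range quoted for radiocarbon dating falls out of simple half-life arithmetic. A quick sketch, using the commonly cited 5,730-year half-life of C-14 (the specific cutoff ages are illustrative; the actual limit depends on instrument sensitivity and background contamination):

```python
import math

C14_HALF_LIFE = 5730.0  # years, commonly cited value

def fraction_remaining(age_years):
    """Fraction of the original C-14 left after a given number of years."""
    return 0.5 ** (age_years / C14_HALF_LIFE)

for age in (5_730, 30_000, 50_000, 100_000, 30_000_000):
    print(f"{age:>12,} yr: {fraction_remaining(age):.3e} of original C-14")
```

By ~50,000 years only a few thousandths of the original C-14 remain, and beyond that the surviving fraction drops below what typical labs can distinguish from background, which is why very old samples all read the same.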
Isochron dating -- basically, comparison of multiple parent-daughter isotope pairs with parent isotopes of different decay rates -- can serve as a check on possibilities ranging from the initial presence of some of the daughter isotope to changes in decay rates (what is going to change multiple decay rates by the exact same fraction?). So it is not in fact true that there are no checks on the assumptions used in radiometric dating.
Of course, I suppose the reasonable thing to believe is that God, for reasons unknown, magically changed all decay rates in perfect lockstep with one another, for no particularly good or obvious reason. Indeed, I have argued in the past and propose again: the very existence of radiometric dates in the millions or billions of years utterly refutes young-earth creationism even if we do not assume these dates are accurate. An all-powerful Creator could trivially easily have arranged for radiometric dating to infallibly yield results consistent with an Earth only ca. 6,000 years old (obviously, long-half-lived isotopes couldn't distinguish between millions of years ago and yesterday afternoon, but they would not indicate dates of billions of years), and a Creator Who wished His creation to bear testimony to His (recent) creative acts would have so arranged it. This is not what we see. Therefore, young-earth creationism is false.
Steven J, I think your argument about the argon-40 isotope in the air is invalid, as any scientific method would subtract from its results the initial status of whatever it is measuring. For example, if you want to measure the mass of the water in Juby's glass, you would measure the mass of the glass before pouring the water into it and then subtract the mass of the glass from the total mass.
A good dating method would run an empty trial right before running the test so you would know what you have on that particular day and time. That way you can subtract the "empty air stuff" from the sample results.
Regarding the actions of the Creator, there is plenty of evidence of His works, but every time some new discovery points towards Him, the theory of evolution, or cosmology, or the big bang, or whatever, changes to accommodate the new evidence. Of course it would have been easy for Him to make radiometric dating read up to 6,000 years old, but then evolutionists would have made up an excuse to say why 6,000 years is not accurate (like the meteor that impacted Mercury to make it heavier than expected).
Wouldn't it have been easier for God to write His Ten Commandments in the sky in letters of fire than to deal with millions of people who don't believe in Him or don't love Him? It is by faith. He has given enough evidence, but He will not give demonstrations (as when the Pharisees asked Jesus for a sign).
Bless you,
Josue
Okay, you do know that tricorders haven't been invented yet? I don't think there's any method of measuring the amount of argon in a chamber that doesn't involve taking the argon out of the chamber, or otherwise altering the contents of the chamber -- which means that the next test uses a different soft vacuum with a different amount of argon. You can't know in advance how much to subtract.
No method is known of altering decay rates by multiple orders of magnitude that would not destroy whatever sample one is trying to date (nor is any method known that changes the decay rates of multiple isotopes by the exact same multiple). So the attempt to explain away such results would be somewhat harder than noting that the solar system is full of rocks that can hit things.
But in any case, there is no such thing to explain away. Fantasies about how scientists would react if someday scientific discoveries started to confirm Genesis (a high density for Mercury doesn't count, since the Bible says nothing about planetary densities -- and indeed implies that planets ought to be attached to the solid dome of the sky a few miles above our heads) are not refutations of evidence.
I'm sorry it took a while to respond. I was off enjoying the holiday. I hope you both had a great Thanksgiving as well.
Josue has already expressed some of my same concerns – namely, the alleged inability to mask out any background argon from dating results. When you go to buy vegetables, have you ever noticed the scales in the supermarket always start at zero even though there is a basket hanging below them? I'm sure the basket weighs something. However, the scales are calibrated to account for the basket, otherwise, we'd have to add the weight of the basket every time we buy vegetables. It sounds to me like Steven J is admitting to a permanent thumb on the scale in every case of radiometric dating.
I'm curious about a couple of things. First, why the constant zigzag in explaining why the tests failed to provide the correct age? Here are the responses in the order I've heard them:
Steve Austin collected a sample from the Mount St. Helens lava dome, known to be ten years old then, and sent it to a geochronology lab which tells people very clearly that the methods they use cannot give accurate dates on samples expected to be less than two million years old. In other words, Austin deliberately arranged for the dating to be invalid. [A vague response which only says the results will be invalid without explaining why]
A few thousand years are not enough time for 40Ar to accumulate in a sample at high enough concentrations to be detected and quantified. Furthermore, many geochronology laboratories do not have the expensive state-of-the-art equipment to accurately measure argon in samples that are only a few million years old. [Henke's blanket statement which also doesn't explain why Austin's test did reveal testable amounts]
'Not able to give accurate dates' generally means that the range of uncertainty swamps the measured age. It does not mean that any arbitrarily old age will result. For example, an age of 0.5 ± 1 million years is not considered either accurate or terribly useful, even though it is correct. [Stassen's reply to my feedback on TO. It does not apply to Austin's samples, so I'm not sure why he said this except perhaps to obfuscate.]
Argon makes up approximately 1% of the atmosphere.... K-Ar dating involves heating a sample, in a vacuum chamber, to release the stored argon, which is then measured. The vacuum chamber does not contain a perfect vacuum; a tiny amount of residual air remains. One percent of that residual air is argon. Therefore, when one measures the argon expelled by the heated sample, one also measures some argon that was already in the chamber to begin with. [Steven J's response to my blog post]
Do you see what I mean? The objection Steven brought up actually sounds reasonable but it wasn't mentioned by anyone prior. They offered a host of other excuses first which makes the whole discourse sound like smoke and mirrors. They're throwing mud at the wall until they find some that will stick.
Continued...
There are a couple of curious things I see but I might save those for a future post. I'd like to make a couple of other points in response to Steven J's remarks.
The earth (indeed, the universe) is only around 6,000 years old. Therefore, there are no rocks anywhere that are millions or billions of years old. Science has only blossomed in the last few centuries so even with a “short” history of only 6,000 years, science has not been around long enough to witness the vast majority of it. We can measure processes that are occurring now, but we cannot extrapolate those back in time without making some assumptions. In the case of radiometric dating, a few assumptions are:
At its origin, the rock had none of the daughter element present
During the existence of the rock, none of the daughter element has been added by any other process
During the existence of the rock, none of the daughter element has been removed by any process
During the existence of the rock, none of the parent element has been added by any process
During the existence of the rock, none of the parent element has been removed by another process
The isotopes have decayed at a constant rate over the existence of the rock
I'm sure there are others but these alone are critical. If the daughter element was present at the rock's origin, the age will be inflated. If any other process added the daughter element to the rock, the age will be inflated. If any other assumption proves to be untrue in reality, then we cannot accurately date any rock. The confidence secular scientists have in their dating methods is merely a strong belief akin to faith.
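To illustrate how much the initial-daughter assumption alone can move the result, here is a rough sketch of a generic single parent-daughter age calculation. The decay constant and measured amounts are hypothetical, and the formula takes the other conditions in the list above as given (closed system, constant decay rate):

```python
import math

def radiometric_age(parent, daughter, decay_const, assumed_initial_daughter=0.0):
    """Age implied by a parent/daughter measurement, assuming a closed system,
    a constant decay rate, and a known amount of initial daughter element."""
    radiogenic = daughter - assumed_initial_daughter
    return (1.0 / decay_const) * math.log(1.0 + radiogenic / parent)

# Hypothetical numbers: a long-half-lived isotope system
decay_const = 5.5e-10          # per year (illustrative)
parent, daughter = 100.0, 3.0  # arbitrary measured amounts

# The same measurement, under different assumptions about initial daughter
for d0 in (0.0, 1.0, 2.0):
    age = radiometric_age(parent, daughter, decay_const, d0)
    print(f"assumed initial daughter {d0}: age = {age / 1e6:,.1f} Myr")
```

With these made-up numbers, the identical measurement yields ages differing by roughly a factor of three depending solely on what is assumed about the daughter element present at the rock's origin.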
Finally, Steven mentioned c-14 dating. I'm sure he's familiar with the RATE project. One of the findings of the RATE project was that c-14 is virtually ubiquitous in every fossil tested. Dinosaur fossils, coals, petrified wood, even diamonds – all of which are alleged to be millions or even billions of years old – still have detectable amounts of c-14 in them. As Steven pointed out, there should be no detectable c-14 left after 100,000 years or so. The implication is that none of these samples are millions of years old. Of course, evolutionists play the same zigzag game to explain why a supposedly million-year-old diamond (the hardest known natural substance and nearly impossible to contaminate) doesn't really have any c-14 left in it.
Thanks to all for your comments. God bless!!
RKBentley
It is one thing to calibrate scales in grocery store produce departments -- the scale weighs the same every time it's used. It's another to calibrate the amount of argon in a vacuum chamber, when the amount of air left in the chamber after as much as possible has been pumped out varies every time one pumps out the chamber.
And this is the reason very small amounts of argon released from a sample cannot be accurately measured; it's the reason there is a minimum age for which K-Ar dating is valid. I cannot help what other sources you have consulted have or have not mentioned.
Carbon-14 is produced, at least in the upper atmosphere and probably in many mineral samples, by the effects of cosmic rays or other radiation on nitrogen-14 (atom bomb tests raised the level of C-14 in the atmosphere slightly). So you have the opposite problem from the K-Ar problem: rather than a natural source for trace amounts of the decay product, you have a natural, recurring source for trace amounts of the parent isotope. Radiometric dating, alas, works by physics, not by magic, which does not mean that its limitations cannot be worked around.
The universe is ca. 13.8 billion years old; the Earth a relatively paltry 4.54 billion years old (with a margin of error of ca. 1% of that age). Again, if we take your "science has not been around to measure such vast ages" argument seriously, we'd have to close down every medical examiner's office, every arson investigator's shop, every NTSB crash scene investigator, since they all investigate past events that they did not, personally, witness. I think you might find such a devotion to the "were you there" school of epistemology a trifle excessive.
As noted, it is often possible to compare ages yielded by multiple pairs of parent-daughter isotopes in the same rock. This serves as a check on the possibilities that some of one isotope has leached into or out of the sample since it was formed. So it is false that these are arbitrary and uncheckable assumptions.
Steven J,
I didn't say you should be able to calibrate your “scales.” I'm saying you're admitting every radiometric age is inflated by background isotopes. Thus I said, it is a permanent thumb on the scale – we just can't be sure how much is being added because of the thumb.
But the objection you raise is sounding less and less reasonable in explaining why the tests assigned millions of years to rocks that were really only 10 years old. Henke said, “Furthermore, many geochronology laboratories do not have the expensive state-of-the-art equipment to accurately measure argon in samples that are only a few million years old.” You're talking about a sample that should have no argon-40, in a chamber that has been nearly completely evacuated of air (maybe not entirely evacuated), still having enough detectable argon to measure millions of years old. It's Juby's water analogy all over again. There is actually only a tiny drop of water in this glass, but it's so small that our equipment can't quantify it, so it just looks full.
Finally, I've discussed the difference between “studying” origins (origins cannot truly be studied scientifically) and forensic science. We might not have observed a particular fire being set but we can set another fire and compare the results to the fire of unknown origin. Fires, murders, burglaries, car accidents, etc., occur every day. We can't recreate a particular event but we can compare unseen events to observed events and speculate about what happened.
We CANNOT observe what a billions-of-years-old earth looks like! “Age” cannot be observed. Even among people, we cannot “observe” how old they are. We can observe things like wrinkles, sagging skin, gray hair, a stooped posture, and compare them to the features of people whose age we know. Some people look older or younger than they really are. Finding an official record of when they were born will give you a more accurate measure of their age than studying them physically. We have never seen what a 4 billion year old earth looks like but we do have a reliable record of its age.
God bless!!
RKBentley
Actually, I did not admit that every radiometric date is affected by "background isotopes." It seems to me that I came closer to implying that this is a problem only with K-Ar dating, although I did not actually say so because I am not sure that is actually true.
Again, if you have a radioisotope whose half-life is over a billion years (unlike the under-6000-year half life of C-14), then over a million years (a twelfth of a percent of its half-life) one would expect only tiny amounts of the decay product from a typical-sized sample.
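The arithmetic behind that claim can be checked directly, assuming K-40's commonly cited 1.25-billion-year half-life:

```python
import math  # not strictly needed here, but kept for clarity with 0.5 ** x

HALF_LIFE_K40 = 1.25e9  # years, commonly cited value

elapsed = 1.0e6  # one million years
fraction_of_half_life = elapsed / HALF_LIFE_K40
fraction_decayed = 1.0 - 0.5 ** (elapsed / HALF_LIFE_K40)

# 1e6 / 1.25e9 = 0.08%, i.e. roughly a twelfth of a percent
print(f"{fraction_of_half_life:.2%} of a half-life elapsed")
print(f"fraction of K-40 decayed: {fraction_decayed:.4%}")
```

So after a million years, well under a tenth of a percent of the parent isotope has decayed at all, and only a fraction of those decays produce the measured daughter.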
As near as I can determine, your argument about the distinction between ordinary forensic sciences and "origins science" is that we cannot be sure that the observed laws of nature continue to operate over spans of time longer than recorded history. But why should they not? You are very close to invoking miracles (e.g. systematic drastic changes in decay rates, or omphalism on a titanic scale) not mentioned in the Bible to explain why we don't see geological evidence of the miracles that are mentioned in the Bible.
Steven J,
You said, “Actually, I did not admit that every radiometric date is affected by "background isotopes." It seems to me that I came closer to implying that this is a problem only with K-Ar dating, although I did not actually say so because I am not sure that is actually true.”
I'm sorry but I'm beginning to lose you. You may not have used the words “background isotopes” but you were talking about argon in the air being detected in the sample, right? Is that a problem or not? Now you're saying it may not actually be true? I don't follow.
Also, how can this only be a problem with potassium/argon dating and not other dating methods? It sounds somewhat like special pleading.
You said, “Again, if you have a radioisotope whose half-life is over a billion years (unlike the under-6000-year half life of C-14), then over a million years (a twelfth of a percent of its half-life) one would expect only tiny amounts of the decay product from a typical-sized sample.”
Again, Juby's rant said, 'There is actually only a tiny drop of water in this glass but because it's so small, we get the false impression the glass is full.' That's exactly what it sounds like you're saying.
You said, “As near as I can determine, your argument about the distinction between ordinary forensic sciences and "origins science" is that we cannot be sure that the observed laws of nature continue to operate over spans of time longer than recorded history. But why should they not? You are very close to invoking miracles (e.g. systematic drastic changes in decay rates, or omphalism on a titanic scale) not mentioned in the Bible to explain why we don't see geological evidence of the miracles that are mentioned in the Bible.”
Not at all. I'm comparing events that we can repeat and observe to events we cannot repeat and did not observe. Fires, murders, robberies, accidents, and etcetera happen every day. We maybe didn't observe a particular event but we can compare what is found at an event we didn't observe to events that we did observe. The Big Bang, abiogenesis, dinos-evolved-into-birds, light reaching us from the edge of the universe, and billions of years of radioactive decay cannot be repeated and was not observed. I think the difference is obvious and you're merely being obtuse when you claim you don't see it.
Why is it that secular scientists stand confidently on uniform processes when it helps their theory (as in radiometric dating) yet raise hosts of variables against uniformity when it hurts their theory (as in lunar recession or the accumulation of minerals in the oceans)? Can you give me your word as a lover of science that in the entire history of any given rock, you are CERTAIN that
> no daughter element was present when the rock formed,
> no daughter element entered or left the rock by any process other than radioactive decay,
> and no parent element entered or left the rock by any process other than radioactive decay?
Never mind if the decay rates have changed. These other assumptions would vastly affect the dates yielded when the rock is tested. If your methods cannot be shown to accurately date rocks of known age, why should I have any confidence when you tell me how old your tests show a rock of unknown origin to be?
God bless!!
RKBentley
Regarding you beginning to lose me, I apparently misunderstood your comment that "every radiometric age" was affected by decay product already present; you meant "every K-Ar date," while I thought you meant "every date, regardless of parent-daughter isotopes used."
Regarding age inflation by background isotopes, actually, with some forms of uranium-lead dating, this was a problem, back in the bad old days of leaded gasoline (and hence, leaded air). It's not intrinsic to the methods, though: lead and uranium, for example, have different chemical properties and form different compounds in nature, so there isn't automatically lead in new-formed uranium ores (and different isotopes of uranium, again, decay into different isotopes of lead, so radiogenic lead can be distinguished from intrinsic lead that might be present).
Regarding Juby's analogy, he extrapolates from the fact that we can't tell if one drop has been added or not (and that is why K-Ar dating is not used for samples thought -- or known -- to be recently formed) to the inference that therefore, we can't tell whether a gallon of water has been added or not.
Regarding how we can infer large amounts of evolution when we have observed only small amounts, we know how light works. We know how inheritance, mutation, and speciation work. Shared genetic variants are routinely accepted as indication of common ancestry in humans (e.g. paternity tests, or figuring out whether Polynesians are descended from south Asians or South Americans), and work the same way in indicating relationships and degrees of relationship among species. It is worth noting that young-earth creationists in general accept that, e.g., horses and zebras derive from a common equine ancestry, but while scientists have observed speciation, they have not observed it resulting in that degree of genetic difference (which exceeds that between humans and chimpanzees).
Regarding why scientists assume that radiometric decay rates cannot vary while tidal forces can, multiple attempts to alter decay rates by various means have produced only trivial changes, unless means are used that would reduce a rock to radioactive plasma and leave us no sample to test. Furthermore, decay rates are linked to quantum-physical properties of atomic nuclei; to change decay rates, you'd have to change (undetectably and without wiping out all life on Earth by changing all chemistry) properties described in the most intensively-tested theory in physics. Meanwhile, tidal forces affect the Earth's rotation most strongly in shallow seas (the Bering strait counts more than all the deep oceans combined). Change the distribution of water on the Earth, and you must change tidal forces (note that the underlying laws of physics do not change, only the situation they are applied to).
Regarding absolute certainty, do you understand your own argument? You are asking that we reject multiple lines of evidence (stratigraphy, multiple different forms of radiometric dating, faunal succession in the fossil record, etc.) if we are less than 100% certain that no other causes could explain the results we obtain, while accepting your interpretation of the Bible if we are not 100% certain that Genesis 1 - 9 is a collection of myths like any other. That is a trifle unreasonable.
Also, geologists do understand a bit about chemistry, and the various ways in which isotopes can leach into and out of rocks. It's not as though rocks change composition routinely, capriciously, and incomprehensibly.