Saturday, 9 April 2016

The LNT fraud

This is an interview between Atomi Per la Pace and Dr Edward Calabrese on the history of dose response - in particular, how it came to be assumed that "no safe dose" was the Word Of Science for all carcinogenic substances (for example: radiation). The recording quality is not brilliant, which is why I copied the transcript over from youtube. It's also long, at 1 hour 26 minutes. Why is it here? Probably because I like to keep references which can be easily linked to. It's easy to set IDs inside the text below, and search it, so that I can link to a precise point in the interview. Much harder to do those searches and make links with youtube. Lots of additional information and links are at the video description.

The interview transcript below differs slightly from the youtube transcript. The main difference is that I've tried to pull paragraphs and sentences together better, so there are fewer timestamps interrupting the flow. Neither is a perfect, word-for-word representation. Both (this and the original youtube transcript) are honest in that they put over the original meaning well:

You may not care to know it, but this is really a subject affecting our lives in many ways. "No safe dose" hasn't only been used to kill nuclear power. It's a general regulation for all substances.
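To make the stakes concrete before the interview begins, here's a minimal sketch of what the two competing models actually say. This is my own illustration, not from the interview, and the slope and threshold numbers are made up: under LNT, any dose above zero carries a proportional excess risk, while under a threshold model, doses below the threshold carry none at all.

```python
# Toy comparison of the two dose-response models discussed in the interview.
# The slope and threshold values are arbitrary illustrative numbers.

def lnt_risk(dose, slope=0.01):
    """Linear no-threshold (LNT): excess risk is proportional to dose,
    all the way down to zero."""
    return slope * dose

def threshold_risk(dose, threshold=10.0, slope=0.01):
    """Threshold model: no excess risk below the threshold dose;
    linear above it."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

for d in (0, 5, 10, 50, 100):
    print(f"dose={d:>3}  LNT risk={lnt_risk(d):.2f}  "
          f"threshold risk={threshold_risk(d):.2f}")
```

The two models nearly agree at high doses but diverge completely in the low dose zone - which is exactly the region the Manhattan Project fruit fly studies described in the interview were meant to resolve.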

APLP: Hi everyone I'm Carlo Pettirossi and I'd like to welcome you to an interview offered by Atomi per la pace. This is actually our first interview, and I'm particularly proud to introduce on behalf of all our staff our exceptional guest, Dr. Edward Calabrese, Professor of Toxicology at the University of Massachusetts. Prof. Calabrese, thanks very much indeed for being with us.
Thank you very much. I appreciate it.
APLP: Before letting you introduce yourself and your work, I'd like to give a very brief introduction for the viewers. Atomi per la pace learned about part of your work thanks to a junkscience article found on Rod Adams' blog atomicinsights, the link to which (as well as detailed information about you and other links) can be found in the description box. By the way, Rod Adams wrote several other articles about your work - the latest of which is dated March 28 and talks about the presentation given at the Cato Institute on March 21. The viewers will find the link to that article too in the description box. This junkscience article - which contains links to two of your papers (both issued in 2011) - talks about the LNT model, which is used by every nuclear regulatory commission worldwide as the radiation dose-response model for determining the upper exposure limits the human body can receive over a certain period of time without permanent consequences.
APLP: Well...according to your studies, the LNT is nothing but the result of a scientific fraud. But it gets worse than this. In fact, the main character in this fraud - Hermann Muller - got a Nobel Prize for it. I guess it's time now to leave the talking to our guest. Prof. Calabrese, please introduce yourself and your area of expertise.
My name is Edward Calabrese and I'm a professor of toxicology at the University of Massachusetts, school of public health, in the environmental health sciences division, and my area of expertise is toxicology, in which I'm certified. I've been at the University of Massachusetts since 1976. I'm a traditional faculty member: I teach three courses per year and I have an active research program. I have been very active in the area of dose-response since 1985, and prior to that I spent ten to twelve years with a particular focus on studying inter-individual variation in susceptibility to pollutant toxicity and carcinogenicity, trying to find out why some people get sick and others don't.
That led me to looking at animal models [...] things about nature and eventually it all led to the better understanding of the dose-response relationship.
APLP: I'd like to invite the viewer to consult your page on the UMASS site for further details about you. First question: can you please tell us how and why you came up with the idea of criticizing the author of the LNT and his model?
Well, it happened by accident. It really happened as a result of the fact that I had written a manuscript - which I was submitting for publication - concerning the historical development of the LNT, and I was going to submit it to a leading toxicology journal.
As a standard mode of operation for me, when I get to a certain point in manuscript development, I oftentimes send it out to a number of people (friendly critics) who evaluate it and give me their honest criticism before I submit it for publication consideration to a journal. I sent my manuscript to a colleague who was extremely well known in the area of toxicology. This particular person came back with a specific but somewhat general criticism, indicating that I needed to address the role of Dr. Hermann Muller more insightfully, especially in the area of the history of the linear dose-response; that draft of my manuscript was not sufficiently informative. I agreed with his criticism and realized that even though I had read some about Muller, I hadn't dug into his life well enough. I then got many of the papers that Muller wrote - which were a considerable amount over his career - and also re-read his biography and studied his Nobel Prize lecture. That led me to wanting to know more about Muller and the details of his involvement with other leading radiation geneticists at the time. I was able to purchase copies from Indiana University, where Muller's papers are held: hundreds of letters he wrote and other types of correspondence.
And then I went to the American Philosophical Association - I think it is in Philadelphia - and purchased other kinds of communications that other people had, which weren't contained in Muller's own files. I had a substantial quantity of information.
In the course of this, I learned a lot about Muller, and I was really trying to understand the switch from a threshold model, which was very dominant up until the mid 1950s, to a linear model for cancer risk assessment. I learned that Muller was very strongly advocating linearity during the 1930s and 1940s, but he wasn't having a lot of success in convincing the regulatory community and the governmental agencies to adopt his views, even though other people within his field - radiation genetics - [agreed] in general with his views of a linear dose-response relationship. These studies were not particularly convincing or clarifying going into WW-II, during which the US created a program called the Manhattan Project, which aimed to produce the atomic bomb.
One aspect of the Manhattan Project was better understanding the nature of the dose-response in the low dose zone - essentially for X-rays, gamma rays and radionuclides. So what happened is that the US government and the Atomic Energy Commission gave a grant to the University of Rochester. One of the recipients was a well known geneticist - Curt Stern - who was conducting studies on dose-response using the fruit fly model, to try to answer the questions on the nature of the dose-response in the low dose zone.
At that time, Muller was a professor at Amherst College (a mile away from where I'm sitting today). He was retained by the Manhattan Project at the University of Rochester, through the actions of Curt Stern, to be a consultant to that project.
Muller provided lots of information. He provided one of the strains of fruit flies; he was very helpful in telling the scientists what kind of study design they should have; he was extremely important in attempting to resolve questions on the control group and spontaneous mutation rate; he reviewed manuscripts for publication... Muller was very involved in this research activity. Sorry for the long story, but this leads to an answer, and that is that in the course of this research two large studies were to be done:
one was an acute exposure to X-rays to the fruit flies in a very structured period of time. In that particular study, with Curt Stern and Warren Spencer (a well known drosophila geneticist), they showed that there appeared to be a linear dose-response relationship.
However, this dose-response question was to be resolved with a chronic exposure study conducted by Ernst Caspari, another well known researcher who was working on Stern's team. The chronic study consisted of exposing the fruit flies to a dose rate which was 1/13000 of the dose rate administered by Warren Spencer in the acute study - therefore a very different type of study, which also incorporated a number of methodological improvements compared to what was done by Spencer. When Caspari got his findings, he went and shared them with his superior (Stern): his data didn't support a linear dose-response relationship. They actually supported a threshold dose-response. Stern didn't want to accept Caspari's findings, and challenged him by claiming that his control group was aberrantly high - that this was why his findings led to a threshold interpretation rather than a linear one. Caspari decided to dig into the literature and found a number of studies which did not support the position of his superior Curt Stern, but rather supported the reliability of his own controls. Then Curt Stern contacted Hermann Muller and asked "can you share with us your data, because you have been looking at the question of the spontaneous control group mutation rate in the same model that we've been using?". Muller provided copious amounts of data to Stern. It turned out that Muller's data supported the interpretation of Caspari. In the letters between Caspari and Stern (and between Muller and Stern), it's shown that Stern ended up backing down, and the original interpretation of Caspari stood (the data supported a threshold). When I was going through this ... I'm jumping ahead in the story a little bit ... at that point I wasn't sure just how much Muller really knew of the conversations between Caspari and Stern (I was to learn later how much he actually knew).
However, what I was wondering about is that Caspari showed in his excellent study that the data supported a threshold - and this was the strongest study, by far, that had been conducted to that point. I also recalled... this was August-September-October 1946. In December 1946, Dr Muller receives his Nobel Prize and gives his acceptance speech over in Stockholm. What he says in his speech is that one can no longer accept the use of a threshold model: it needed to be replaced by a linear model. He really lays this out in very strong and definitive terms - the threshold is really intellectually and scientifically dead and it needs to be replaced by this linear model. Having read and studied his comments, I then said to myself:
"I know he was a consultant to the Manhattan Project; I know he communicated with Stern and the investigators. But had he really read Caspari's manuscript that showed the data supported a threshold - in fact, Caspari was advocating for a tolerance dose in the manuscript?"
And I said: "I have consulted on a lot of studies in the past, and sometimes people share information with you, and sometimes they don't show everything". So I didn't really know what he knew. I said: "did he really see the paper before he gave his Nobel Prize speech?".
So... I had some doubts. In the course of going through all the letters that were sent back and forth between Muller and Stern and others, I found out that on November 6, 1946 Stern sent Caspari's manuscript to Muller, asking him to review it.
Muller receives and answers Stern on November 12 saying: "I've received the manuscript of Caspari. I have pretty much skimmed it over, went over quickly. I can see that this is a serious challenge to the linear dose-response model. This study needs to be replicated.
I don't have any reason to doubt Caspari's capability to do a proper study. I'll get back to you with detailed comments before my trip to Europe [to get his Nobel Prize]". So, at that point, as far as I was concerned, he was aware of it, he knew what the implications were, and he had some sense of the credibility of the people he was a consultant for. But I was waiting for his "major" review. Then he goes to Stockholm and, as I mentioned before, there he basically goes headlong in the opposite direction, asserting that there is no possibility of a threshold! None whatsoever! And yet he had actually seen that there was a very strong challenge to the threshold and that it had to be replicated. Now... replication of this kind of study is not trivial!
It involves at least a year - maybe more - lots of money, big sets of expertise. And you don't want to waste anybody's time, money and everything if it's not significant! But it was significant enough for him to say that. What he should have said, in my opinion, is:
"there is still some uncertainty; we have to do more research to resolve the question of the dose-response relationship...". There wasn't enough data to say you can no longer consider the possibility of a threshold.
He basically - in my opinion - was misleading the audience!
Now people could say to me: "well, maybe he changed his mind between November 12 and December 12". But I was lucky enough to get a January 14, 1947 letter from Muller to Stern in which he gave his detailed evaluation of Caspari's results.
In it - it's almost an opening statement - it says: "there's almost nothing I can add to this. It's a very well conducted paper". He reasserted that the linearity was challenged. The research was so significant that it needed to be replicated as soon as possible.
He didn't necessarily believe at all that the threshold was the correct interpretation. But that's why you do the additional studies! The studies had to be done to resolve these questions! He didn't have any technical issues. Basically the issue was:
"let's get this replicated". So, as far as I was concerned, what happened was that Muller's opinions had not changed. He reaffirmed his original opinion. And it raised further questions for me: how could he have made his statement to the Nobel Prize committee?
Knowing what he knew - and his opinions didn't change - in private he was revealing his true feelings to Stern, but in public he was giving a different story. He was really, in my opinion, misleading, deceiving and being dishonest to the public.
While in private he was being, you know?, a good scientist! He really can't have it both ways. So I didn't go into criticizing Muller because I had any axe to grind with Muller, because I had any issues with him or an historical problem with him. I actually ended up criticizing Muller more or less as an unexpected by-product of trying to produce a better paper. So...sorry for the long story, but that's how I uncovered, so to speak, the beginnings of this controversy over how we transformed from a threshold to an LNT. And I think it really started with this major act of deception and this major public display of scientific aggrandisement, and...where we put one of our major achievers on display and try to learn from him. You're expecting that he's telling you the truth! And he wasn't!
APLP: Ok! Next question: can you give us some more details about Muller's and Caspari's experiments?
Yes, I can tell you about it. There were two experiments done: the acute and the chronic study. I think this is important; then I'll get back to your question. The reason why Curt Stern challenged Caspari is that he and his radiation genetics community really believed, or wanted to believe, very strongly in a linear dose-response relationship. And Caspari's data did not support it. First there was the challenge that Caspari's control group was wrong - and Caspari answered that question. But there is a very interesting thing here: if you read the Caspari paper (with Stern as a co-author) on this topic, published in 1948, almost the entire six page discussion was a disclaimer, such as "even though our data is what it is, please don't accept and use it until you or we can explain why our findings differ from the earlier acute studies done by Stern and Spencer". It's interesting that Stern forced the chronic study to explain why it differed from the acute one - not the other way around ("you can believe Spencer's data but you can't believe Caspari's data"). Beyond that, about 25 methodological differences between the acute and the chronic studies were identified. For instance, in the acute study they used x-rays and in the chronic study they used gamma rays.
In the acute study they gave a direct exposure to the males, while in the chronic study copulation had already taken place, and therefore the exposure was actually given to the sperm as it was stored in the females. The organisms receiving the exposure were different.
The diets were totally different: the females used by Caspari received a diet which would prevent the laying of eggs, and a totally different diet was used in the other case. There were, then, 25 differences between these studies, such that you actually couldn't go back and ever figure out exactly why one study differed from the other, because you had too many simultaneously differing circumstances to resolve. Usually in experiments you keep everything constant except one variable. In this case, there were 25 differences between the studies! Stern is a very significant gentleman, with a lot of experience. Muller has as much experience - if not more than Stern - and Caspari is very talented himself! They all knew that you could not go back in and resolve these differences. But nonetheless, in this paper that's what they are telling the reader to do. So it made no sense to me, even looking back after so many years, that these people - who were really outstanding individuals, great intellects, all of them - could ever have written this, and that anybody else would have believed them.
The interesting thing is that, after Muller read the paper - Muller's name was added to the paper as a consultant - the only other change that happened in the manuscript was due to his influence: Caspari and Stern removed every reference to this being a threshold phenomenon. This was like a minor change in a sentence, but it removed the key word - tolerance THRESHOLD dose-response. The interesting thing that happens along this line, in my opinion, is that you know that at some point in time this is going to be revealed (Caspari's data being published and supporting a threshold, while Muller was telling the world's audience that there was no possibility that a threshold could ever exist).
His having seen Caspari's data - and having been a consultant on the study - had the potential to challenge his credibility as a scientist and Nobel prize winner.
Muller's recommendation that Caspari's study would have to be replicated actually took place - at the University of Rochester. Caspari was applying for a new job. Spencer was going back to his old job. So they needed a new person to take over and do the actual work.
They got a young lady, Delta Uphoff, who was a new graduate student. She came in to work with Stern and replicated some of Caspari's work. In her first experiment, she shared her data with Stern, and her control data was reading significantly below what was expected (about 40% below what was reported in the literature, and about 40% below what Muller's data had shown). Stern and Delta Uphoff then decided that their findings didn't have credibility. They wrote their manuscript based on this experiment.
In it, they claimed that their data were not interpretable because her control group was aberrantly low. They cited Caspari's work and the literature. They also cited Muller and thanked him for submitting his data for them to review.
Muller was specifically acknowledged, thanked for allowing them to use his data. So there is a positive affirmation that he knew what was going on. By the way, in the discussion of their manuscript, the writers (I suspect it was Stern, but I have never seen this before) asserted that one reason why they might have had the aberrant findings was in fact investigator bias. I suspect he was referring not to himself but to Delta Uphoff. That was never clear, but there were two names on the paper.
She then went ahead trying to do a second study and she had again the same problem with the control group - another aberrantly low value - and they couldn't use this data either (it was still the "uninterpretable zone").
She then did a third experiment, where the results of the control group appeared normal to me. But at the low dose she was studying, if you looked at this from the point of view of a linear dose-response, the values were 3 to 4-fold greater than what would have occurred under a linear prediction. And so, when one looked at this, it would have appeared that her low dose response was aberrantly high. It appeared that Delta needed more experience in doing this kind of research.
And essentially the findings were, in my opinion anyway, that none of her experiments had a lot of credibility. We're in the situation now where Stern is trying to get this work published. He takes Spencer's paper on the acute study, already published in the journal Genetics, as well as Caspari's paper. He then adds the three Delta Uphoff experiments. He rolls them all together in a type of what I call "a modern day meta-analysis" and he tries to make sense of them. He presents this in a single table in a one page paper in the journal Science, saying, more or less, that the two studies with aberrantly low control groups are normal! He now can interpret them, but he doesn't share with the Science audience that less than a year before they were uninterpretable (that their results were aberrantly low). He doesn't go back and explain how the data had changed and become normal - because the database had not changed - and in every study that came afterwards, he'd actually reassert the aberrantly low nature of the control group in those findings.
In this Science paper he also claims that the control group of Caspari's work had irregularities and therefore is not reliable. He backed into his original position - the one that Caspari had essentially refuted - now claiming that Caspari's data were not reliable.
Delta's experiments, which he had claimed were not reliable, he now made reliable. He added them all together and came up with a linear dose-response relationship that supported the foundations of the LNT. He got this into Science and promised the readers, the scientific community, that he would follow through with a detailed report on all the methodologies and all the data that he couldn't show - where you could actually see the variability, methods, strengths, weaknesses... however he worked with his experimental system.
He never followed through with this publication. So what happened is that two papers became very significant in the radiation genetics research community: Spencer's work, which showed linearity for the acute exposure, and then this meta-analysis that Stern did with Uphoff in Science, in which they had all these slight changes... making something appear real that one year before was not real, and basically not sharing it with anybody else. Most of this was hidden in the - until some point in time - classified literature of the Atomic Energy Commission that was never broadly available. Essentially it becomes a story that needs to be discovered. And that's what I actually discovered when I went back in my studies, digging through all this, and found the manuscripts that other people hadn't read or seen, the correspondence about this... The interesting thing here is that there were two things going on: this paper in Science is very significant because it really reaffirmed the belief in linearity. And other people of that time started citing it, and they said: "Stern worked with 50 million fruit flies... how can 50 million fruit flies be wrong? Everything points to linearity! Caspari's study had an unusually high control group. He couldn't believe its findings. Therefore it has no credibility".
Through these things that were wrong, or misinformation, day became night; night became day; false became true; true became false. And nobody was trying to follow the data. You know... with a person who is showing you a trick, you have to follow the data.
There was a very high appeal to authority within the community. People liked Stern and Muller. Part of this was going on because they really tried to affirm linearity, but they also had to protect Muller's reputation. Muller really had misled the scientific community at the Nobel prize. If Muller had his reputation damaged, it would really hurt the arguments for the LNT. Both had to be protected at the same time - in my opinion.
Now... I was interested in how Muller responded to all this in 1950. He published two significant papers in 1950... this is really hard to believe... but Muller goes into one of these papers and says: "Caspari published his paper which deviated from Stern. But its control group was aberrantly high". So... Muller, whose own data had been used to support the reliability of Caspari's findings, now concludes - falsely - that Caspari's data were aberrantly high, when Muller had actually come to Caspari's defense!
And nobody challenges Muller! Not even Caspari challenges Muller on this!
It was a blatant misrepresentation of the record by a Nobel prize winner, in the aftermath of this deception during the Nobel prize...cause it actually gets even worse: a lie is piled upon another lie! And nobody challenges this!
And I have tracked every single communication, in letters and cables, between Muller and Stern during these time periods, and seen how they went through and how they tabulated... however they communicated, these are contained in manuscripts submitted for publication.
Stern also takes the lowest value in Caspari's study and rounds the value down, so that he can extend the range over which he claims linearity occurs. So, even if linearity were true - which the data really did not support - you could have claimed a dose range of, let's say, 250 thousand fold; by rounding the value down incorrectly, Stern extends it to 400 thousand fold. He does different things in which he is either wrong or dishonest, and other people actually cite him as THE authority!
It was Muller's goal, Stern's goal and their colleagues' goal to really change the risk assessment paradigm: to have ionising radiation seen not as a threshold phenomenon but as a linear dose-response phenomenon for risk assessment purposes.
In 1955 the Rockefeller foundation provided funding to the US National Academy of Sciences to put together a very distinguished, broad-based group called the Biological Effects of Atomic Radiation (BEAR-I) Committee. It preceded the BEIR Committee that we currently have in the US National Academy of Sciences. There was the BEAR-I and the BEAR-II, and then it shifted to BEIR-I, where they just changed from "Atomic" to "Ionising" - that's the only difference. But the interesting thing is that I expected there had to have been a battle between those who supported threshold and those who supported linear. As it turns out, if you get the transcripts of this first ever BEAR-I Committee - which I have obtained - and you read them all from cover to cover, backward, inside and out, you find out that there is no debate on the nature of the dose-response in the low dose zone. It is accepted from the moment they walked in that the dose-response was linear. And when you look at the comments of Muller, and the comments of other members of that committee in the written language prior to that committee, he (Muller) claims that in the early 1950s the decision had been made amongst this radiation genetics group that it was no longer a threshold: it was really a linear dose-response. So there was no debate! The committee came [...] with essentially a very large proportion of radiation geneticists who were of the same mind. And because they were of the same mind, the decision was automatic: right to linearity.
When you go back and look at the literature, the ones that they all go back and cite are Spencer's study and the Uphoff and Stern study in Science. These are the two critical references upon which the switch in that BEAR-I Committee was evidently based. In my opinion, the key one was the chronic study - the Uphoff and Stern one. This was the one that, in my opinion, was pretty fraudulent, in the ways that I described.
Within a year after the BEAR-I Committee came out with its recommendation, the NCRP decided to recommend that the finding for germ cell mutation (linearity) be generalized to somatic cells, and that just opened up the application to cancer risk assessment.
And ever since then, it has just followed a linear dose-response relationship. And this not only had its impact on ionizing radiation: the US EPA years later took the same rationale and the same bases and applied them to [...] carcinogens, and it just generalized even further. And that's the regulatory history for cancer risk assessment in the US and essentially most countries throughout the world! It actually is a very terrible history.
Its foundation is based upon misrepresentation at the highest possible level from the people you're actually depending upon.
APLP: Are the original letters between Muller, Stern and Caspari available somewhere on the internet?
They are not available on the internet. I can certainly send you my copies. They are publicly available from the same sources I got them from: Muller's papers from Indiana University, Stern's communications from the American Philosophical Association.
However, I have published different articles quoting these letters as part of the papers. And it's not uncommon for the editors to require me to show proof of the letters. So I have had to provide copies of the letters to the editors or to the reviewers of the papers they publish, because these are very specific claims that I'm making. For somebody to actually affirm that I pass the peer review process, I have to provide documentation and proof to the editorial judgement of these independent journals.
They actually have to have that as backup when my work is criticized. It's part of what's called "due diligence" in the peer review process. But I can certainly send you a copy of the letters that I have obtained. The journals know I'm required to provide this: if what I'm citing is not generally available - and these letters weren't considered generally available, because you'd have to purchase them - then I have to provide it to the journals.
APLP: Why do you think the Rockefeller foundation set up the BEAR Committee?
I actually don't know the answer to that question. I know that the Rockefeller foundation was a strong leader in many aspects of the biological sciences. It had a strong social/political conscience. I suspect that - this was 1955, and I was a child during that time period - there was a lot of cold war tension. And I believe - that was the time of the atmospheric testing of the (atomic) bomb - they wanted to better understand what the public health risks might be from atmospheric testing... perhaps water know...
the new developments of nuclear medicine, nuclear energy... It was a new world that they were entering in 1954-1955. So I think it may have been their far-sighted insight. But I haven't delved exactly into what was truly motivating them. I'm kind of guessing right now.
APLP: Ok, thanks. Next question: do you think that testing (ionizing) radiation on fruit flies (as Muller did) could provide a good estimate of its effects on humans?
I think in qualitative terms it could. In quantitative terms it probably would not be a particularly good idea. It's very difficult to extrapolate, in a quantitative sense, from one species to another. There's a lot of uncertainty in that.
It's very difficult in the world of toxicology to extrapolate even from a mouse model to a rat model, let alone from a mouse to a human model. Even after Muller's work was done, there was research done at Oak Ridge with mice that showed something like a 15-fold difference between the mutation rate in the mice and the mutation rate in the fruit flies. Muller used just one species of fruit fly. There are many you could have studied. The same goes for the many mouse strains. There are some that are more susceptible, some that are less susceptible.
This is a very difficult area in terms of providing quantitative extrapolations. Qualitatively I think that these models are very useful. They can tell whether a mutation is or isn't occurring.
If it qualitatively happens, it suggests that you should look more carefully at the species of interest.
APLP: What's your opinion on the reason why radiation regulations have been based upon a scientific fraud?
Most of it came out of the recommendation of the BEAR-I Committee back in 1956, in which the fundamental decision was made to switch from threshold to linearity. And everything followed from that. This was... I'd have to say... a political-philosophical decision by the radiation genetics community. They believed, in my opinion, that they wanted to perhaps save the world and future generations from mutations. They may have been, from what I can read, well-intentioned people.
However, they basically gave up their scientific credibility to make decisions based upon their philosophy. What they owed the country and the world was not their philosophy: they owed the world their scientific judgement!
And then society and its political leaders could weigh how the science fit into the political judgements. But I believe that the radiation genetics committee in 1956 essentially gave up its authority to Muller and Stern.
It's a very difficult situation to figure out exactly, because all those members went to that committee with their minds made up. They all believed in linearity. I have gone back and looked in detail at the publications of all the members of the BEAR-I committee, and there were only three or four that had significant experience with low-dose studies. Most of the others were men of achievement, but it was another kind of achievement! They did not have experience with how to design and conduct low-dose studies, with the experimental nuances that you would have to know, and with the problems that arise when you do those kinds of studies. So in effect what they did was make appeals to authority, to people like Muller and a couple of others on that committee who shared Muller's views.
And essentially they allowed those decisions to pass through. And Muller and two or three others became the key people who decided the policies for the rest of the 20th century. It all came back to the deception of Stern's studies. It's actually quite amazing!
It's hard to believe that it turned on a dime. But it's a very narrow point on which it turned. It reminds me of when the US Challenger fell out of the sky back in 1986, and it was all because O-rings that should have been replaced didn't function properly.
And all these very great engineers at NASA, all these talented people... everybody thought everybody else was doing their job. But actually somebody missed the O-rings. And people died... a whole disaster. In this case people didn't follow what was going on with Stern and Caspari and Uphoff and Muller. They appealed to authority. And now we have 60 or 70 years of LNT regulation based upon what I think was fraud, deception, misleading, not providing all the information, substituting philosophy for science by people who actually were outstanding scientists of the day. People that we looked to for guidance and that we trusted.
APLP: Can you explain and give us an example of how a threshold value is determined?
It's interesting that you raise this question. It should be pretty easy. Basically a threshold resonates with our common experience. People watching this (video) might have a sense of a threshold when they drink wine. You might have half a glass of wine and you enjoy the taste but you don't feel any sensation, like spinning in your brain. But if you have two or three glasses of wine you may begin to feel the psychological effects of the wine. At some point you pass a threshold and something happens.
Below that level, there is no detectable biological effect. This is pretty much a common person's view of what a threshold is. In a statistical sense, if you go below the threshold we expect there to be what we call variability or noise within the system, and you'd expect random bouncing but no significant deviation from the unexposed control group. Both perspectives should agree with each other. The perspective of the common person - with the glass of wine - shouldn't be any different from the biostatistician's, taking a look at the random bouncing below an estimated threshold. They're both telling you the same thing. There shouldn't be any detectable biological effect below a threshold. From the radiation/mutation point of view, it should be safe below the threshold.
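As a minimal sketch (with hypothetical numbers, not fitted values), the threshold model described above can be written as a function whose expected effect is zero below the threshold and rises with dose above it:

```python
# Illustrative threshold dose-response model. The threshold and
# slope values here are hypothetical, chosen only for the example.

def threshold_response(dose, threshold=10.0, slope=2.0):
    """Expected excess effect under a simple threshold model."""
    if dose <= threshold:
        return 0.0                      # no treatment-related effect below the threshold
    return slope * (dose - threshold)   # effect rises with dose above it

# Below the threshold every dose behaves like the unexposed control:
assert threshold_response(0.0) == 0.0
assert threshold_response(9.9) == 0.0
# Above it, the effect grows with dose:
assert threshold_response(15.0) == 10.0
```

In real data the measured responses below the threshold would bounce randomly around the control value, which is why detecting a threshold statistically requires comparing low-dose groups against controls.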
APLP: Are we in the presence of a threshold in the hormetic model?
Well... the hormetic dose-response is a biphasic dose-response. In the case of mutation, cancer and radiation we're looking at a J-shaped dose-response. So when the doses are high you see a dose-response relationship such that mutations increase proportionally (and so does the cancer risk). But at lower and lower levels, according to the hormetic model, you reach a point which is really the equivalent of a threshold, at which there is no treatment-related effect compared to the control group.
And you might think that there is no effect as you lower the dose further. But actually, in the hormetic model, if you lower the dose further, we observe that the risk dips down below that of the control group. And therefore it shows a J-shaped dose-response, compared to a straight line, or to a flat line and then a kick up in the threshold response. The hormetic dose-response is what I spend a lot of time studying. And I find it to occur for essentially most chemical and physical stress agents - like ionizing radiation - and it's independent of the biological model or the level of biological organization. It occurs in the cell, the organ and the organism. And it occurs independently of the biological mechanism as well. It's a very general phenomenon and it's getting more attention today in the pharmaceutical and chemical industries and in the non-ionizing radiation area as well. Not just from a regulatory point of view but for therapeutic applications as well. So the question is how to make these new insights into the nature of the dose-response more helpful to public health and to therapeutic applications.
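The three dose-response shapes being contrasted here - LNT, threshold, and the hormetic J-shape - can be sketched side by side. All slopes, thresholds and the size of the protective dip below the baseline are hypothetical illustrations of the shapes, not real toxicological values:

```python
# Illustrative comparison of the three dose-response models.
# Response is excess risk relative to the unexposed control (0.0);
# a negative value means risk below the control group (hormesis).

def lnt(dose, slope=1.0):
    return slope * dose                       # LNT: risk at any dose above zero

def threshold(dose, t=5.0, slope=1.0):
    return 0.0 if dose <= t else slope * (dose - t)   # flat, then rising

def hormetic(dose, t=5.0, slope=1.0, benefit=0.5):
    # J-shape: a protective dip below the control baseline at low
    # dose (deepest midway between 0 and t), rising risk above t.
    if dose <= t:
        return -benefit * dose * (t - dose) / (t / 2) ** 2
    return slope * (dose - t)

low, high = 2.5, 10.0
assert lnt(low) > 0           # LNT: every dose carries some risk
assert threshold(low) == 0    # threshold: no effect below t
assert hormetic(low) < 0      # hormesis: risk dips below control
assert hormetic(high) > 0     # all three models agree at high dose
```

The only disagreement between the models is in the low-dose region, which is exactly the region that is hardest to measure experimentally.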
APLP: Why do you think the majority of the general public wouldn't or doesn't believe in the existence of a threshold?
Well... it's the way toxicology is communicated to the general public... For example: if you take a look at all the chemicals regulated by the international community and by individual countries, the standards are very similar. Let's say for drinking water contaminants - carcinogens and non-carcinogens alike - when you take a look at the number of molecules that it takes - even for the strong carcinogens - before you see a response taking place, you actually have to have an individual exposed on a daily basis for 70 years, more or less, to anywhere between probably ten billion and ten trillion molecules per day. Every day, for 70 years, for powerful carcinogens to begin to show carcinogenic effects. Over 70 years, we're talking about 10^22 to 10^24 molecules to show the beginnings of an effect.
Just consider this, to show how the LNT model is lacking in credibility when it's applied to anything! It's how the risk communication message has been framed; it's been held captive by regulatory agencies whose mission in many ways has been to preserve their regulatory positions and jobs, to frighten the public into concerns that weren't justified. I don't mind being challenged by those whom I'm challenging. I'd say: "show where my interpretations are incorrect". But I have gone back to all the regulatory estimates;
I have done the calculations and I was surprised... I thought that carcinogenic agents would be much more active at lower doses than non-carcinogenic agents. But actually it's about the same range - per day - of about 10 billion to maybe 10 quadrillion molecules before you see any change in the biology! You can hold up a glass of water that has a contaminant at a drinking water standard, and you might think it's perfectly safe. And that "perfectly safe" glass might have 100 billion molecules of a toxic substance in it.
The glass looks nice and clear. You can't see that there are 100 billion molecules (of a toxic substance) in it. You'll drink it and you'll think that it's nice and safe! And it probably is nice and safe. But in it there are maybe 100 billion molecules of a regulated toxic substance! And this makes me ask: how toxic is it really, if a regulatory agency can approve a drinking water standard with 100 billion molecules in it? Yes, it probably needs to be regulated. But let's see this in relation to the LNT - because it says that there is no safe level and that a single molecule can initiate this pathological process, when in fact there would be 100 billion molecules and you would need to be exposed to them for your entire 70- or 80-year life span.
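The arithmetic behind "billions of molecules in a legally safe glass of water" can be checked with Avogadro's number. As an assumption for illustration (the interview doesn't name a specific contaminant), the sketch below uses the US EPA drinking-water standard for dioxin (2,3,7,8-TCDD), 3×10⁻⁸ mg/L, a molar mass of about 322 g/mol, and a 250 mL glass:

```python
# Order-of-magnitude check: molecules of a regulated contaminant in
# one glass of water at the legal limit. Contaminant choice (dioxin)
# and glass size are illustrative assumptions, not from the interview.

AVOGADRO = 6.022e23            # molecules per mole

mcl_g_per_l = 3e-8 * 1e-3      # EPA dioxin MCL: 3e-8 mg/L -> grams per litre
glass_l = 0.25                 # a 250 mL glass
molar_mass = 322.0             # g/mol for 2,3,7,8-TCDD (approximate)

moles = mcl_g_per_l * glass_l / molar_mass
molecules = moles * AVOGADRO
# Comes out in the tens of billions - the same order of magnitude
# as the "100 billion molecules" figure quoted in the interview.
assert 1e9 < molecules < 1e12
```

Even for one of the most tightly regulated carcinogens, a "safe" glass still contains on the order of ten billion molecules, which is the point being made against the one-molecule-matters reading of LNT.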
You have to see that there is a discontinuity between what the regulatory policy is and what the actual scientific issues and understandings are. They really need to be addressed. The LNT concept was adopted with no scrutiny.
It was essentially pressed forward on the basis of ignorance, fear, philosophy and, as far as I'm concerned, misrepresentation of the scientific record by the leaders we're talking about today.
APLP: Can you talk about the process that would enable the (human) body to protect itself against radiation by means of radiation preconditioning?
The concept of preconditioning is very significant. Radiation preconditioning is in fact a subset of that. But preconditioning for the audience is seen when a very low dose of an agent is given prior to the exposure to an overwhelmingly high dose.
And you try to see whether that initial low dose, given maybe a day or two before the high dose, affected the toxicity of the high dose. There can be a dose that kills an animal or makes it [...]. In many cases, the prior low dose can profoundly protect against the effect of the more massive dose given subsequently. I know from work that we have done with the chemical called carbon tetrachloride that you could give a very, very low dose of it - a dose that causes almost no discernible changes in the organism - and then, one day later, you give a dose that would kill 95-100% of the animals, and none of the animals die! The low dose protects them from the subsequent insult. You see something comparable happening with low doses of radiation.
You see this happening - people listening to this might find it really odd... A good portion of us are going to die from heart attack or something related to it. I mean, 40-50% of deaths in the US are due to heart-related conditions. Researchers at Duke University in 1986 applied a relatively modest hypoxic stress to dogs. A day or two later, they gave these dogs a massive myocardial infarction, or heart attack. And they found that the dogs that had had the mild hypoxic stress a day or two before had essentially 70-80% less damage than the dogs that didn't! The investigators coined the term "preconditioning", and then it was applied to many other systems, so that you could actually protect the brain by preconditioning; you could protect the heart by preconditioning; the liver, the skin, and many others after that. People found that you could even protect the body after the damage, by post-conditioning! And now these concepts are being implemented in medicine. Here is how this relates to dose-response: if you vary the preconditioning dose - from very low up to much higher - each followed by the large dose, you'd find that the dose-response is an inverted U, just like a hormetic dose-response. So preconditioning is a manifestation of the hormetic dose-response concept.
That's why I'm particularly interested in studying this phenomenon. So, it relates to the world of radiation: radiation biology, radiation therapy... There are so many things that could happen in medicine. You could use a low dose of radiation before giving the patient a massive one; you could protect him from subsequent damage. People are now finding ways to use preconditioning for patients before they have major surgery, so that they will have enhanced recovery during the surgical process. This is a wonderful new series of opportunities emerging in the health care system. I can give you another example, regulatory-wise, that we published from our lab: the US EPA and many other regulatory agencies say that if you take a kidney toxin and then a second kidney toxin, the response is additive.
Two bad things: one plus one equals two. But we took inorganic mercury, which is a kidney toxin, and inorganic lead, which is also a kidney toxin. We gave the lead one day before we gave the mercury. According to the EPA, they should be additive. But because we gave them in a preconditioning sense - the lead one day before a strong dose of mercury - we reduced the mercury toxicity by 70 or 80%! It was just like what happened with the dogs (the Duke University experiment in 1986). It really showed that the regulatory approach to chemical mixtures, when you separate them in time within a conditioning framework, was basically not supported at all! It's a new toxicology today, and much of our toxicology was based upon ideas that need improvement. It's very difficult for regulatory agencies to change, to admit they made a mistake and to follow the data. They are tied into defending past decisions even when these can no longer be supported and are generally known to be wrong. And that's the case with LNT.
APLP: By the way, what quantities of mercury and lead are we talking about?
I can't recall exactly those amounts, because it was a number of years ago and the study was in mice. But in the framework of preconditioning the quantities can be very minor. For example, in rodent studies, you could take a blood pressure cuff - all of us have had blood pressure taken - and wrap it around the animal's thigh. There are protocols in which you squeeze it a few times - tighten it up, let it go - and that's a preconditioning stress. Something as minor as that! Then, if you cause damage to the heart or the brain, that preconditioning stress - if you waited a day or so - would actually result in protecting the brain from damage! It's the same concept as using the low-level lead against mercury. It's a stress, just like dietary stress. People have shown that if you give food only every other day in an animal model, missing food for a day stresses the animal in such a way that it results in the up-regulation of many adaptive responses that will protect the animal from subsequent stressor agents it could encounter in its environment. What we're learning about today is a whole series of ways that the body has to protect itself against low-level, moderate-level or high-level stress, by essentially using "preconditioning vehicles". Even low doses of radiation can serve as a preconditioning stress.
So, it's very significant what's happening and what will happen down the road.
APLP: When one talks about radiation, the thought of most people goes immediately to the accidents of Chernobyl first and Fukushima more recently. I read from your CV, page 24, that you participated as guest editor for a publication with the title: "Distribution of Artificial Radionuclides in the Abandoned Cattle in the Evacuation Zone of the Fukushima Daiichi Nuclear Power Plant". On page 29 of your CV, you mention your 2011 paper "Improving the scientific foundations for estimating health risks from the Fukushima incident". Can you please talk about these studies?
These were evaluations of some of the scientific literature that might have relevance to assessing risk at Fukushima and perhaps other places as well. The message here is that, even though there can be improvement in the science, if you feed all your science through a linear dose-response model the interpretations are going to come out wrong. Because whatever you end up doing in any human study on populations, or in animal models, you still have to somehow extrapolate from the patient or animal exposure down to levels that are extremely low.
So what I have tried to do here - in these papers and elsewhere - is to say that our fundamental way of assessing risks was wrongly constructed, and it has led to incorrect estimates of risk as applied to Chernobyl, to Fukushima and other places.
And the result of those grotesque overestimates of risk is policy decisions aiming to shut down areas, to evacuate people, to have all kinds of actions taking place that sometimes are far worse than the exposure to the agents that we have. So what we really need is a much more scientifically defensible risk assessment process that is consistent with the toxicological and epidemiological literature. Basically, once you believe that a single ionization can initiate the process of carcinogenesis, there is no return, because everything then becomes fearful. And you think there's a risk with a single anything. And as I mentioned to you a few minutes ago about the amounts of chemical carcinogens in drinking water - but you can relate this to radionuclides as well - we saw that under EPA drinking water standards you can be exposed to 100 billion molecules per day for an entire lifetime - 70 years - and have no increase in measurable cancer risk. None! So the model is totally wrong! And yet if you were to apply the LNT model to this, you'd be concluding that a single molecule is a concern. At least to the general public, no level is safe. And it's a wrong message! It's not a scientifically defensible message! I call it a radicalised perversion of science that is now like a disease that has captured government policy when it comes to risk assessment and that, in my opinion, needs to be challenged and changed.

And this would play out very differently in the management of places like Fukushima. Because there is also a psycho-social component to this as well. And that is: when you frighten people unnecessarily, you change the status of the whole family structure. And the mind can play strange games on the body, health-wise and social-relationship-wise. This is a very serious situation that needs to be confronted. And if you try to challenge this, as I'm trying to challenge this, people attack you for your models; they make up models that don't exist.
All I'm looking for is for people to look at the data; then let's go from there together and see what the data actually reveal.
APLP: Is the hormetic model beginning to be considered by the nuclear regulatory commissions?
I think the hormetic model is growing in its scientific acceptance and utility. It's widely used by the pharmaceutical industry. To everyone listening to this, I can say that the anti-anxiety drugs, the anti-seizure drugs, the memory drugs and many others are based upon hormetic dose-responses in the preclinical data, with mouse or rat models - or whatever animal they used. They are [...] in a totally pervasive actuality in the pharmaceutical literature. Even those people who would totally oppose hormesis don't know that most of the pharmaceuticals they take to preserve their health are based upon the hormetic dose-response model. They totally accept it in one domain - not knowing that they do - and then in this other domain, called environmental regulation, they say "we can't go down this road!" when in fact they've already embraced it. It's a very interesting contradiction. But I think that things have to actually change within regulatory agencies such as the US EPA. They create the conditions of the game. In the game of risk assessment, the EPA has defined risk assessment in such a way that it excludes the concept of benefit. Hormesis could actually incorporate a benefit into the equation. For example, if a low dose of radiation - or of a chemical - extended our life, it would be significant to know whether it could reduce the occurrence of certain kinds of illnesses or adverse effects. If an agent extended our life for 30 years, the EPA would never consider it. They would regulate the agent and so avoid the benefit! It's just a way to spend taxpayers' money, as far as I'm concerned.
But the interesting thing along these lines is that if hormesis were to show - and it could - that a dose below the threshold dose is harmful - i.e. a low dose of a drug enlarged the prostate gland - and it could do this in an inverted-U biphasic dose-response, the EPA would say "OK! Let's set the standards upon that!" Hormesis is fine as long as it shows harmful effects! But hormesis is not fine if it shows that something is potentially beneficial. That kind of dichotomy, of inconsistency, makes no sense to me from a public policy point of view and it really needs to change. Better to follow the data and do what's best for the general public.
APLP: Alright! Last question - just an evaluation, if possible: how much would the safety costs of a nuclear power plant be reduced after an honest review of the LNT?
I think that there would be profound savings within the community - for the industry, the ratepayers, the users, right across the board. If LNT were replaced by a threshold model or the hormetic model, you would have profound savings from top to bottom. And the interesting thing is that not only would you save tremendously in terms of money, but the general health of the public would be significantly enhanced. It's difficult to understand how the public tolerates regulatory communities that impose costs on them and reduce their health status.
But that's what our regulatory position has been within the US and other countries. And it's my hope that rationality and following the science will lead to a reversal of these kinds of actions and positions in the future.
APLP: Prof. Calabrese, thanks very much for your time and for being with us.
Thank you very much for the opportunity.
