Intelligent scientists in legitimate professions understand that when working with large data sets, error distributions tend to be quite random – and the best way to deal with them is to assume that they average out to zero.
This understanding also provides a way to detect whether the error distribution is skewed, as would occur if there had been selective dropout of rural stations or if the data had been deliberately tampered with.
The approaches of using anomalies, gridding and infilling mask systematic changes to the data, and must be avoided when doing this sort of analysis.
If the data set is legitimate, then random dropout of monthly data or movement of stations will have very little effect. Some stations will get warmer, some will get cooler. Trying to “correct” for this opens the door to confirmation bias – or worse.
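A quick toy simulation can illustrate the claim. Everything here is made up for illustration (the station count, the error size, and the 0.005-per-year trend are all assumptions, not real USHCN numbers): if station errors are random, dropping half the network at random barely moves the estimated trend.

```python
import random

random.seed(42)

# Hypothetical network: 1000 stations, each seeing the same true trend
# plus independent random measurement error (all values assumed).
n_stations, n_years = 1000, 100
true_trend = 0.005  # per year, illustrative

def station_series():
    return [true_trend * yr + random.gauss(0, 0.5) for yr in range(n_years)]

stations = [station_series() for _ in range(n_stations)]

def network_trend(series_list):
    # Least-squares slope of the network-mean series.
    means = [sum(s[yr] for s in series_list) / len(series_list)
             for yr in range(n_years)]
    xbar = (n_years - 1) / 2
    ybar = sum(means) / n_years
    num = sum((yr - xbar) * (m - ybar) for yr, m in enumerate(means))
    den = sum((yr - xbar) ** 2 for yr in range(n_years))
    return num / den

full = network_trend(stations)
# Randomly drop half the stations and recompute.
dropped = network_trend(random.sample(stations, n_stations // 2))

print(f"full network trend:   {full:.4f}")
print(f"after random dropout: {dropped:.4f}")
```

With random errors, both numbers come out very close to the true trend; if random dropout changed the answer materially, that itself would be a sign the errors are not random.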
The fact that there are systematic adjustments cooling the past and warming the present, seen in nearly every state and city, throws up a huge red flag that the adjustments are not legitimate. Why would anyone want to hide this information by using anomalies, gridding and infilling, which would be unnecessary in a legitimate data set anyway?
Except for the Urban Heat Island effect. Correction for that should result in an across-the-board reduction in recent temperatures relative to past temperatures.
That is correct. The only legitimate systematic bias goes in the opposite direction of the adjustments.
When did the first known/recognized use of gridding appear?
Year?
Paper?
Author(s)?
In an honest data set, gridding would be useful. In a corrupt data set, it obfuscates.
Anthony needs to respond legitimately and answer some questions. He needs to answer his critics, especially on the skeptic side. I think he is wrong here, and should come here, or post something himself on his site in answer to this data question. The issue is too important to get into a pissing contest. It would be great if McIntyre could weigh in as well. Be that as it may, I think Anthony is wrong on this. I love his site though.
Second time today to paste this one up
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
Yes, the sign should be random, and the magnitude should decrease over time as the accuracy and precision of the measurements improve. That is not what that graph shows, and as I wrote on another thread here, it makes no sense unless you consider that something fishy is going on.
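The "sign should be random" point can be put on a simple statistical footing with a sign test. The numbers below (48 of 50 regions adjusted the same direction) are purely hypothetical, chosen only to show how improbable near-uniform signs are under the random-error assumption.

```python
from math import comb

# One-sided binomial sign test: probability of k or more of the same
# sign out of n, if each sign is an independent coin flip.
def one_sided_sign_test(n, k):
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Hypothetical example: 48 of 50 regions adjusted in the same direction.
p = one_sided_sign_test(50, 48)
print(f"P(>=48 of 50 same sign | random errors) = {p:.3g}")
```

The tail probability is around one in a trillion, which is why adjustments that nearly all point one way demand an explanation beyond "random error".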
The same phenomenon exists with sea level, except there you have to generate your own graph of the changes made to the data.
That graph is still an absolute head-shaker. How on Earth can anyone look at that and not at least suspect systematic bias? One assumes that the purpose of the adjustments is to make the data fit more closely with reality. What kind of data set is it where the raw measurements taken a hundred years ago are consistently accurate and yet the modern measurements are consistently reading too cold? Yes, it is possible to come up with some bizarre kind of reason, but is it a plausible reason?
If the corrections were legitimate, that would imply the correct output from these 'adjusted' stations was known in advance, along with their errors (since the adjusters know what the answer should be). If that is the case, and the data from these stations can be 'corrected' and infilled, then why can't this logic be extended and applied to all stations? And if it can be done for all stations, then close every station but one and just calculate/adjust/infill from that single representative US station.
Question
How does anyone know there are too few stations actually doing the measuring?
Didn’t Gavin say he could calculate the global temp with just a couple of hundred stations? I tend to agree with him.
An interesting idea. 😉
I’m still left wondering as to how many stations are enough.
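One way to get intuition for "how many stations are enough" is a Monte Carlo sketch. This assumes, optimistically, that stations sample the anomaly field independently (real stations are spatially correlated, which is the whole argument for gridding); the 2.0-degree spread is an assumed figure, not a measured one.

```python
import random

random.seed(1)

# Assumed spread of local anomalies about the global-mean anomaly (deg C).
field_sigma = 2.0

def sampling_error(n_stations, trials=2000):
    """RMS error of the network-mean anomaly over many random draws."""
    errs = []
    for _ in range(trials):
        sample = [random.gauss(0.0, field_sigma) for _ in range(n_stations)]
        errs.append(sum(sample) / n_stations)
    return (sum(e * e for e in errs) / trials) ** 0.5

for n in (50, 200, 800):
    print(f"{n:4d} stations -> sampling error ~ {sampling_error(n):.3f} C")
```

Under these assumptions the error falls off as 1/sqrt(n), so a couple of hundred independent stations would already pin the mean anomaly down to roughly a tenth of a degree — which is the flavor of the claim attributed to Gavin above. The catch is that none of this helps if the stations share a systematic bias.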
What I don’t understand/am not convinced of is: even if some meaningful “global temperature” can be calculated, what correlation/meaning does it have with respect to global climate? I think that any kind of correlation would be very difficult to justify.
The core issue here is whether Mankind is affecting temperatures via our extra CO2. Since the answer appears to be "No", or very little (perhaps 0.25 degrees due to CO2 over the past 100+ years), there is little or no correlation between humans and climate.
Thus we don't need to stop doing anything. But I'm a big Conservationist, so let's wean ourselves off oil not because of some hoaxer argument, but because it is a limited resource that will eventually run out, causing dramatic sociological change and disaster when it becomes scarce. That is why we need renewables and nuclear as a buffer for when this happens.
Philippe Jones says….
At least you are rational about the need to move to other energy sources.
As a chemist I hate the idea of burning up such a useful resource as oil. (Think plastics, medicines…)
PS I agree with using the word Conservationist to differentiate yourself from the eco-nuts.
Actually, temperature is a really miserable way to measure energy, since it completely ignores the latent heat of evaporation of water (H2O), which has not necessarily remained constant.
http://www.climate4you.com/images/NOAA%20ESRL%20AtmospericSpecificHumidity%20GlobalMonthlyTempSince1948%20With37monthRunningAverage.gif
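The latent-heat point can be made concrete with moist enthalpy. This is a textbook back-of-envelope sketch, not anyone's published method: the constants are standard values, and the two humidity figures are illustrative.

```python
# Moist enthalpy per kg of air: sensible heat plus latent heat of the
# water vapour it carries. Constants are standard textbook values.
CP = 1005.0   # specific heat of dry air, J/(kg K)
LV = 2.5e6    # latent heat of vaporization of water, J/kg

def moist_enthalpy(t_celsius, q):
    """q = specific humidity, kg water vapour per kg air (illustrative)."""
    return CP * t_celsius + LV * q

# Same 30 C thermometer reading, dry desert air vs humid tropical air:
dry = moist_enthalpy(30.0, 0.005)
humid = moist_enthalpy(30.0, 0.020)
print(f"dry air:   {dry / 1000:.1f} kJ/kg")
print(f"humid air: {humid / 1000:.1f} kJ/kg")
```

Two air masses at the identical temperature can differ by tens of kJ/kg in energy content, which is why a temperature-only metric can miss real changes in atmospheric energy if humidity trends, like those in the linked graph, are not constant.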
I really laugh at all the extra digits past the decimal point as if they were actually significant.
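The significant-digits complaint has a standard quantitative form. The sketch below assumes readings quoted to the nearest 0.5 degree (a common graduation for old liquid-in-glass thermometers, assumed here): independent rounding errors do shrink as 1/sqrt(N) when you average, but any shared systematic bias does not shrink at all.

```python
# Rounding to the nearest 0.5 degree leaves roughly +/-0.25 of
# uncertainty per reading (half the graduation, an assumed figure).
read_uncertainty = 0.25

for n in (1, 100, 10000):
    random_part = read_uncertainty / n ** 0.5   # averages down
    systematic_part = read_uncertainty          # does not average down
    print(f"N={n:5d}: random part ~ {random_part:.4f}, "
          f"systematic part ~ {systematic_part:.4f}")
```

So quoting a network mean to three decimals is only defensible if the errors are independent and there is no shared bias; the whole thread's point is that the second condition is exactly what's in doubt.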
Interesting timing; a couple of us were just commenting on this point in a previous comment thread.
Yes, I just finished posting a comment similar to this.
Seems simple enough. So now I would like to hear from the derision section why you are wrong. Can you dial them up?