http://scienceandpublicpolicy.org/reprint/2010_hottest_year_ever.html
I like the SPPI PDFs; they make a great reference collection.
Congrats Steve, well done.
Prize goes to the first person who notices the typo before it gets fixed on SPPI’s web site.
I only looked as far as the cover but…
Steven Goddard, not Steve Goddard?
Or, if it was just printed, why is it a “Reprint”?
Ian Hamilton below noted the “Is” versus “Was” thing, but I don’t think so. If 2010 actually was the hottest year, it still is.
…and the weather isn’t getting “weirder”, either:
http://online.wsj.com/article/SB10001424052748704422204576130300992126630.html
most excellent – typo or no typo
Thank you!
Nice boiling hot image there on the cover. I love that his margins of error are an order of magnitude greater than the delta above 1998. Reminiscent of Steig et al. But the Hadcrut data really brings the hammer down.
I would have thought “Was 2010 the hottest year ever?” would be a better title, as the document is dated this year. Either way, a most informative PDF. Thanks for posting.
Good job. It all makes perfect sense.
MrC
http://media.theage.com.au/national/selections/kevin-rudds-holiday-heaven-2187208.html
Kevin Rudd’s 5-bedroom house – for the future? What about the greatest moral challenge? What’s the carbon footprint?
Hottest year ever?? WTF
What about the mega La Niñas at Nazca in the past? That’s most likely why the Nazca lines were built: to appease the water gods.
http://www.terracycles.com/joomla/sections/1-earth/10-elninovolcanism.html
http://www.spacedaily.com/news/climate-02y.html
What they discovered caught the attention of the scientific community worldwide: Every 1,500 years or so Greenland’s climate had undergone temperature changes of up to 59 degrees Fahrenheit.
http://www.terracycles.com/joomla/sections/1-earth/10-elninovolcanism.html
Volcanism: the ignored variable, especially when it comes to El Niño cycles.
http://joannenova.com.au/2011/02/announcing-a-formal-request-for-the-auditor-general-to-audit-the-australian-bom/
Australian Data about to be audited yippeeee!!!
Very nice Steve! I’m not sure about the typo, but apparently a graphic didn’t load between pg 16 and 17. Anyway, congrats! I’ve got it saved for reference!
The La Niña graphic wasn’t what I expected, but then I don’t follow it very closely. It seems to have a sharp uptick recently. Are we moving to an El Niño this quickly? Does anyone know?
I must be missing something:
Combined global land and ocean annual surface temperatures for 2010 tied with 2005 as the warmest such period on record at 1.12 F (0.62 C) above the 20th century average. The range of confidence (to the 95 percent level) associated with the combined surface temperature is +/- 0.13 F (+/- 0.07 C).
The global land surface temperatures for 2010 were tied for the second warmest on record at 1.73 F (0.96 C) above the 20th century average. The range of confidence associated with the land surface temperature is +/- 0.20 F (+/- 0.11 C).
Global ocean surface temperatures for 2010 tied with 2005 as the third warmest on record, at 0.88 F (0.49 C) above the 20th century average. The range of confidence associated with the ocean surface temperature is +/- 0.11 F (+/- 0.06 C).
http://www.noaanews.noaa.gov/stories2011/20110112_globalstats.html
Where is 0.01 claimed and where is (+/- 0.3) confidence that disputes it?
GISS shows 2005 0.62 and 2010 0.63 http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt
I was being conservative about 0.3. 0.46 is more realistic http://scienceandpublicpolicy.org/images/stories/papers/reprint/uncertainty_global_avg.pdf
Do you still feel like you are missing something?
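Here is a quick way to make this dispute concrete, as a minimal sketch: treat each quoted ± value as an independent 95% uncertainty on each year’s anomaly (the independence assumption is mine), so the uncertainty of the difference between two years adds in quadrature, and then compare it against the 0.01 C gap between the GISS figures quoted above.

import math

# GISS anomalies (deg C) quoted above for the two years
anom_2005, anom_2010 = 0.62, 0.63
diff = anom_2010 - anom_2005

# The three +/- uncertainties discussed in this thread (~95% level):
# NOAA's claimed 0.07, the "conservative" 0.3, and the SPPI paper's 0.46.
for u in (0.07, 0.30, 0.46):
    u_diff = math.sqrt(u**2 + u**2)  # independent errors add in quadrature
    print(f"u=+/-{u:.2f}: diff={diff:.2f} C, u(diff)=+/-{u_diff:.2f} C, "
          f"record distinguishable: {abs(diff) > u_diff}")

Under this reading, even NOAA’s own ±0.07 C makes a 0.01 C “record” a statistical tie; the larger uncertainties only make it more so.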
Why would there be such a difference between the NOAA climate record (linked to by Ron above) and the NASA GISS climate record (linked to by Steven above) in this area? How do NOAA and NASA GISS explain this difference? Do they even bother to explain it? Which, if any, climate record can be trusted, or how would you rank the “trust” or “accuracy” of each climate record?
Yes I still feel like I’m missing something. NOAA claims confidence of 0.07 C. http://www.ncdc.noaa.gov/cmb-faq/faq-globalprecision.php
A half-degree confidence is like saying it is impossible to measure regardless of technology. On the surface that doesn’t pass the smell test, considering that we are talking about thousands of measurements made in the last 5 years.
I tried to find even one person who said why they thought 0.46 was accurate, but could only find things like SPPI. I might as well ask Sarah Palin what she thinks.
The data presented by NOAA claims to be statistically significant. I did not find the explanation that a single station has uncertainty to be a valid argument against their claim. According to the logic of SPPI, since a neuron has firing uncertainty, we should not be able to throw a baseball accurately.
Did you actually read the paper? They are missing data across more than 10% of the planet. How could their accuracy be several orders of magnitude higher? It is ludicrous for you to defend them.
The Sarah Palin defense is pretty pathetic BTW
I’m not getting that. Did they lose data on 10% of the planet since 2005? Again you are saying that it is impossible to measure (unless we have 100% of the data). I just don’t buy the argument that global temperature cannot be measured. If we were arguing temperatures from the 1800s you would have a point, but we are only talking about the last 5 years.
I used Sarah for emphasis. Lame? Yes. I did add that, in my opinion, the estimate of half a degree is not logical based on the inaccuracy of a single station. That’s why we don’t rely on a single measurement.
So you didn’t read the paper? Their own map (in the paper) shows very clearly the regions of the Earth where they have no data.
Steve, this is a rather amusing discussion, but if I may interrupt… Ron, here is a comprehensive study of the various temp sensors now employed, and an assessment of their accuracy, or lack thereof. It’s a fairly easy read and quite difficult to dispute. http://journals.ametsoc.org/doi/pdf/10.1175/1520-0426%282004%29021%3C1025%3ASAEEIA%3E2.0.CO%3B2
From the conclusions: “The MMTS sensor and the HO-1088 sensor use the ratiometric method to eliminate voltage reference errors. However, the RSS errors in the MMTS sensor can reach 0.3°–0.6°C under temperatures beyond −40° to +40°C. Only under yearly replacement of the MMTS thermistor with the calibrated MMTS readout can errors be constrained within ±0.2°C under the temperature range from −40° to +40°C. Because the MMTS is a calibration-free device (National Weather Service 1983), testing of one or a few fixed resistors for the MMTS is unable to guarantee the nonlinear temperature relations of the MMTS thermistor. For the HO-1088 sensor, the self-heating error is quite serious and can make temperature 0.5°C higher under 1 m s⁻¹ airflow, which is slightly less than the actual normal ventilation rate in the ASOS shield (Lin et al. 2001a). … Even so, the HMP35C sensor in the AWS network can experience more than 0.2°C errors in temperatures from −30° to +30°C. Beyond this range, the RSS error increases from 0.4° to 1.0°C. … For the USCRN PRT sensor in the USCRN network, the RSS errors can reach 0.2–0.34°C due to the inaccuracy of the CR23X datalogger…” and it goes on.
Any claim to an accuracy of ±0.07 is spurious. We simply don’t have the mechanisms in place that can achieve it.
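For readers wondering what the “RSS errors” in the quoted conclusions means: independent error components are combined as the root-sum-square, i.e. the square root of the sum of their squares. A minimal sketch with made-up component values (illustrative only, not taken from Lin and Hubbard’s paper):

import math

def rss(components):
    # Root-sum-square: combine independent error components (deg C)
    return math.sqrt(sum(c**2 for c in components))

# Hypothetical error components for a single sensor (illustrative values only):
components = [0.20,  # thermistor calibration drift
              0.15,  # datalogger / readout electronics
              0.25]  # self-heating under poor ventilation
print(f"combined RSS error: +/-{rss(components):.2f} C")  # -> +/-0.35 C

Note that a few modest per-component errors quickly add up to several tenths of a degree for a single station.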
I forgot to add the most important part: “Acknowledgments. The authors wish to acknowledge the financial support provided by the National Climatic Data Center (NCDC) and the USCRN program for this study. We are thankful for the valuable reviews of this manuscript by Dr. George E. Meyer in the Department of Biological Systems Engineering, University of Nebraska, and Dr. Tilden P. Meyers at the Atmospheric Turbulence and Diffusion Division of the NOAA/OAR/Air Resources Laboratory.”
It seems NOAA is fully aware they can’t legitimately claim ±0.07°C accuracy, but they do anyway.
Very interesting article. I can see the systematic error in measurements over 40 degrees C reduces the reliability of the readings at that level. It appears to change the confidence from 0.2 C to 0.4 C. I get that. What I don’t get is how that changes the overall confidence level. You cannot ignore that increasing the number of measurements improves your confidence level overall.
http://en.wikipedia.org/wiki/Law_of_large_numbers
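Both sides of this exchange can be made concrete. The Law of Large Numbers does shrink independent random error, roughly as 1/√n, which is Ron’s point; but it cannot touch a systematic bias shared across the whole network, which is the point of the sensor and coverage objections. A minimal sketch (my own illustration, with made-up numbers):

import random

random.seed(42)
TRUE_TEMP = 15.0  # true regional mean (deg C), hypothetical

for n in (10, 100, 1000, 10000):
    # Case 1: independent random error only -- averaging helps (~1/sqrt(n))
    readings = [TRUE_TEMP + random.gauss(0, 0.5) for _ in range(n)]
    err_random = abs(sum(readings) / n - TRUE_TEMP)
    # Case 2: same readings plus a shared +0.3 C systematic bias
    # (e.g., a network-wide sensor/shield issue) -- averaging can't remove it
    err_biased = abs(sum(r + 0.3 for r in readings) / n - TRUE_TEMP)
    print(f"n={n:>5}: random-only error={err_random:.3f} C, "
          f"with shared bias={err_biased:.3f} C")

More thermometers drive the first column toward zero, but the second column converges to the 0.3 C bias no matter how many stations you add.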
Having 1000 thermometers in the US does not improve accuracy if you have almost no thermometers in Africa or the Arctic.
Ron, that was for only 2 of the 4 sensor types studied. And even then, one only got ±0.2 for accuracy if the temps stayed within −40 to +40 C.
As Steve points out, we’ve virtually no coverage in the Arctic and Africa (as well as the Antarctic). The fact is, we don’t know what was occurring on the ground or ice of these places. The various studies do show that uniformity of temperatures doesn’t occur. In other words, from year to year or over any other period of time, some places warm while others cool even if the overall mean has increased. We don’t know. One frustration I have in the climate debate is the amount of certitude people ascribe to certain things, such as the confidence or accuracy of our readings. Heck, given our lack of knowledge and coverage, 2009 could just as easily actually be the hottest year ever.
I’d also like to point out that you’re making a comparison of 2010 to 2005 when no such direct comparison exists. The values are derived from a comparison to a baseline that started well before anything other than mercury thermometers was employed. So the comparison is oranges to apples and apples to oranges, and then comparing the resulting values.
We are comparing 2010 and 2005 with all of the 1900s. If that is wrong then fine, but that doesn’t change that both 2010 and 2005 are compared to the same baseline. I understand your point about the accuracy of measurements and the differences in temperature at different locations. Keep in mind what we are trying to measure. I agree you will be way off trying to measure the temperature of the US by taking one reading in one location. What you get is a number that is closer to the actual temperature of the US the more readings you take. We get a temp that is closer to reality the more we measure. If we found all of a sudden there was global warming because we never measured Africa and now we do, that would be a valid argument. That we don’t get 100% of the data is not a valid argument.
GISS extrapolates data across the Arctic where they have no thermometers. The artificial data has the largest anomalies (by far) and those large imaginary numbers have a huge effect on the global average. Hansen’s error is off the charts, yet he claims a record using 0.01C precision. Complete bullshit.
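A toy version of the leverage argument about infilled regions (all numbers hypothetical, chosen only to show the mechanism): assign a small unsampled area a large extrapolated anomaly and watch what it does to the area-weighted global mean.

# Toy globe: (area fraction, measured anomaly in deg C); numbers hypothetical.
zones = {
    "well-sampled mid-latitudes": (0.70, 0.40),
    "sparsely sampled tropics":   (0.25, 0.50),
}
ARCTIC_AREA = 0.05  # unsampled fraction to be infilled by extrapolation

sampled_area = sum(a for a, _ in zones.values())
mean_sampled = sum(a * t for a, t in zones.values()) / sampled_area

for arctic_anom in (0.5, 2.0, 4.0):
    mean_infilled = sum(a * t for a, t in zones.values()) + ARCTIC_AREA * arctic_anom
    print(f"Arctic infilled at {arctic_anom:+.1f} C -> global mean "
          f"{mean_infilled:.3f} C (sampled-only mean: {mean_sampled:.3f} C)")

In this toy setup, 5% of the globe carrying an extrapolated +4 C anomaly moves the “global” mean by nearly 0.2 C relative to the sampled-only figure, far more than the 0.01 C margin being claimed as a record.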
Ron, we’re not anywhere close to 100%, and the distribution isn’t symmetric. When referencing “hottest year ever”, we’re not discussing U.S. temps. We’re discussing world temps. As Steve pointed out, you could put 1 million more thermometers in the U.S. and England, or any other place thermometers already exist, and you still won’t be any closer to knowing the world’s temp.
But, while we’re on the discussion, you are aware GISS and NOAA have systematically removed a majority of the thermometers from the database, right? So, if one were to apply the “Law of large numbers”, we’ve lowered the accuracy within the last several years.
Here’s a video that illustrates both the lack of proper coverage and the inversion of the “Law of large numbers”.
http://wattsupwiththat.files.wordpress.com/2008/03/stationhistory_v10.wmv
You can also view some maps and read a bit about it here,
http://climateaudit.org/2008/02/10/historical-station-distribution/
“In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.”
http://en.wikipedia.org/wiki/Law_of_large_numbers
Unlike the rolling of dice, which has a fixed set of possible integer values in the range 1 to 6 and where the Law of Large Numbers might contribute relevant information, it doesn’t seem to apply to large numbers of temperature sensors, for a number of reasons; see the sketch after this list.
First, there is no “error” in rolling dice as there is in a temperature sensor.
Second, missing readings would be similar to missing dice throws and how you treat that is problematic.
Third, temperature data isn’t a fixed set of numbers; it’s highly variable and influenced by the seasons, day–night cycles, and many other factors. As a result there wouldn’t be a convergence on an “expected” value, due to the nature of Nature: temperatures don’t have an “expected value” unless you believe they do!!! Nature certainly doesn’t have an expected value; it’s humans that “expect”.
Fourth, as pointed out elsewhere, accuracy and full coverage in one area doesn’t mean accuracy or full coverage elsewhere on the planet. No full picture is available now.
Fifth, even if we had a grid of temperature sensors every 100 km across the entire planet, over land and over water, we’d still have a resolution problem. Micro-climates can be quite small; sometimes crossing the road takes you into a different climatic zone with different temperature, humidity and other weather parameters. Heck, in Hawaii they have 21 of the 22 climate zones, and you see these tiny climate or micro-climate zones all over.
Sixth, measuring the temperature of the Earth’s surface of land and ocean is problematic. In many ways it’s like the problem of measuring the coastline of Great Britain: the more accurate your ruler, the longer the coastline. It’s actually a fractal problem. The main question is how fine-grained a resolution is “sufficient” for measuring the planetary temperature: 2,400 km? 1,200 km? 1,000 km? 250 km? 100 km? 10 km? 1 km? I’d trust the data better if it were at 1 km sensor resolution; that way we’d be getting down to micro-climate-zone resolution. Certainly a 1,200 km radius is utterly unacceptable, and as we see, that chunky resolution (when compared with 250 km) creates a warming bias that is unacceptable if one values honesty, accuracy and integrity in science. Maybe back when Hansen wrote his horrific 1987 paper that allowed such inaccurate data-fabrication methods it was fine as a guesstimate, but it’s no longer acceptable. We need as much accuracy and resolution as possible, especially when billions upon billions will be spent based upon the data. Basing that spending upon fabricated data is fraud, especially when it is now known to be fabricated data that generates a false warming bias.
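A toy illustration of the resolution point in the sixth item (a synthetic field of my own construction, not real data): sample the same temperature field at different grid spacings and the estimated mean shifts, because a coarse grid simply cannot see the finer structure.

import math

def temp_field(x_km):
    # Synthetic 1-D temperature field with structure at several scales
    # (purely illustrative, not real data)
    return (15.0
            + 3.0 * math.sin(x_km / 2000.0)   # continental-scale variation
            + 1.0 * math.sin(x_km / 250.0)    # regional variation
            + 0.5 * math.sin(x_km / 37.0))    # micro-climate-scale variation

DOMAIN_KM = 40_000  # roughly one lap around the planet

for spacing_km in (1200, 250, 100, 1):
    samples = [temp_field(x) for x in range(0, DOMAIN_KM, spacing_km)]
    print(f"grid spacing {spacing_km:>4} km: estimated mean = "
          f"{sum(samples) / len(samples):.3f} C")

To be fair, this sketch shows sensitivity to resolution rather than a warming bias per se; whether the shift goes warm or cool depends on which structure the coarse grid happens to miss.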
Let me try one more time. I did read the paper, although I did not read it thoroughly (I would if I had more time). I can see the missing data. How does the missing data, which is missing through the full period we are talking about, invalidate NOAA’s conclusions? Can you explain how the missing data, which is consistently missing from 2005–2010 as far as I can tell, widens the confidence interval to half a degree C? You may be able to make that argument for data from the 1800s, but not for 2005–2010. It just doesn’t hold up.
The article very clearly concludes that 2005 and 2010 were 0.62 C above the 20th Century average. Are you arguing the 20th Century average, the 0.62 C difference or just the 0.07 C confidence level?
This article is about GISS. Why are you discussing NOAA?
Excellent paper Steve. Very interesting. Learning a lot from it.