More On GISTEMP Differences

Reader “Amino” put together this video, showing how GISS has diverged over the last decade from other data sources.

Video: http://www.youtube.com/watch?v=6ROMzxA4A9c


21 Responses to More On GISTEMP Differences

  1. barry says:

    Why were 12-year periods chosen? After all, they are not statistically significant, especially with the satellite record, where the data has more variance, particularly in the Nino/Nina years, of which 1998 was a huge anomaly.

    And I think I answered my question with the second sentence. 1998 to 2010 is the period to present (with a full year’s data) where you get the greatest possible divergence between trends – mainly because the satellite data has a much higher jump for 1998 than the surface records. Actually, you can get more divergence if you use shorter time-frames, but that’s even sillier.

    If we choose a statistically significant period – 20 years will work for satellite and surface data – then it looks a little something like this.

    Although GISS is still the higher trend, the greatest divergence (between GISS and HadCRUT) is 0.03C per decade. GISS diverges from the satellite record by about 0.02C/dec.

    Extend the trend analysis to present (1991 – July 2010 inclusive), and the trend between GISS and the satellite records is reduced to 0.01C/dec.

    (If a trend is run from 1998 to present, all records show a positive trend – not statistically significant, of course.)

    It’s interesting to note that of the various temp records, including the recent efforts on blogs by skeptics and others, GISS is consistently the lowest of the long-term trends. It’s only in the last 30 years that GISS has begun to diverge. Either it’s poor data, poor methodology, or GISS is successfully capturing the greater heating of those parts of the world not covered (HadCRUT), or not well covered (UAH/RSS) by the other data sets – the poles. We know that the North Pole has warmed considerably in the past three decades – about 3 times the rate of the globe. As HadCRUT doesn’t cover it, we’d expect GISS to show a higher warming trend for the last 20 – 30 years.

    (UAH North Pole trend – 0.47C/dec)
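
    For anyone who wants to check numbers like these, here is a minimal sketch of the calculation, assuming the monthly anomalies for each record have been aligned to a common baseline and saved to a CSV (the file name and column names are hypothetical placeholders):

        import numpy as np
        import pandas as pd

        # Hypothetical file: one column of monthly anomalies per record, indexed by date.
        df = pd.read_csv("monthly_anomalies.csv", index_col="date", parse_dates=True)

        def decadal_trend(series, start, end):
            """OLS slope over [start, end], in degrees C per decade."""
            s = series.loc[start:end].dropna()
            years = s.index.year + (s.index.month - 0.5) / 12.0   # decimal years
            return np.polyfit(years, s.values, 1)[0] * 10.0       # C/yr -> C/decade

        for name in ["GISS", "HadCRUT", "UAH", "RSS"]:
            print(name, round(decadal_trend(df[name], "1991-01", "2010-07"), 3), "C/decade")

    The divergence figures quoted above are just differences between slopes computed this way.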

    • Amino says:

      It’s clear you don’t know the first thing about what you’re talking about. Either that, or you are attempting to distract people because you don’t like what you see.

      The time period doesn’t matter. How the data sets relate to each other is what is at issue.

    • Amino says:

      1998 to 2010 is the period to present (with a full year’s data) where you get the greatest possible divergence between trends – mainly because the satellite data has a much higher jump for 1998 than the surface records.

      You did not mention HadCRUT. But maybe you did not know HadCRUT is not taken from satellites.

    • Amino says:

      It’s good that you talk a lot, barry, so people can see you don’t know what you’re doing.

    • Amino says:

      It is not a 12 year time period. It is a 31 year time period. Do you see why?

    • Amino says:

      barry says:
      September 19, 2010 at 3:09 pm

      Why were 12-year periods chosen? After all, they are not statistically significant,

      Why exactly did you say this?

  2. Amino says:

    Why a 12 year trend? Because of the very accusation you bring up here—cherry picking. This accusation has been made by you, jeez, Mosher, Tamino, and everyone else that doesn’t take the time to carefully look everything over before they accuse. If I just used one 12 year trend from 1998 to 2010 then you could get away with saying I cherry picked. But I used EVERY 12 year trend from 1979 to 2010. That’s the beauty of what the graphs show—the wide divergence GISS takes in recent years is not created by manipulating data with cherry picking. Rather, these graphs show GISS divergence from the other data sets is real and needs to be looked at much more carefully. It also shows, again, that GISS cannot be trusted for accurate data. All of the graphs show they cannot be trusted.

    Can you understand what I am saying? Do you want to understand?
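
    For reference, here is a minimal sketch of that "every 12-year trend" calculation, assuming the monthly anomalies for all four records sit in one hypothetical CSV with a column per data set:

        import numpy as np
        import pandas as pd

        df = pd.read_csv("monthly_anomalies.csv", index_col="date", parse_dates=True)
        records = ["GISS", "HadCRUT", "UAH", "RSS"]

        def trend_c_per_decade(s):
            years = s.index.year + (s.index.month - 0.5) / 12.0
            return np.polyfit(years, s.values, 1)[0] * 10.0

        rows = []
        for start in range(1979, 2000):                     # windows 1979-1990 ... 1999-2010
            window = df.loc[f"{start}-01":f"{start + 11}-12", records].dropna()
            trends = {r: trend_c_per_decade(window[r]) for r in records}
            trends["spread"] = max(trends.values()) - min(trends.values())
            rows.append({"window": f"{start}-{start + 11}", **trends})

        print(pd.DataFrame(rows).round(3))                  # one row per 12-year window

    The "spread" column is simply the gap between the highest and lowest trend in each window, which is one way of putting a number on how far apart the data sets drift.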

  3. Amino says:

    especially with the satellite record, where the data has more variance, particularly in the Nino/Nina years, of which 1998 was a huge anomaly.

    Temperature readings have something wrong with them? You are trying to make people believe we shouldn’t trust data? Satellite readings are less reliable than the GISS product?

    You see, that is what one must think in order to trust GISS.

    barry, you don’t know what you are doing. You are just rambling with the usual party line we hear all the time.

  4. Amino says:

    1998 was a huge anomaly

    Back to that ole ‘1998 doesn’t count’ thing you guys do.

  5. barry says:

    If I just used one 12 year trend from 1998 to 2010 then you could get away with saying I cherry picked. But I used EVERY 12 year trend from 1979 to 2010. That’s the beauty of what the graphs show—the wide divergence GISS takes in recent years is not created by manipulating data with cherry picking.

    I noticed that there was a lot of variance with each 12-year trend. Sometimes GISS was at the top, sometimes the satellite data had the highest slope. GISS diverges most in the last 12-year trend because the last few years have been hotter in that record than in the others, and because 1998 was a higher anomaly in the satellite record.

    I think there are problems with all the records. None are perfect. It’s not really a mystery why GISS shows a higher trend 1998 – 2010.

  6. barry says:

    But you didn’t answer my question. I asked – ‘why a 12-year trend’? Your answer was – ‘I used many of them’. But why didn’t you use 15 year trends? Or 20? Or 4? Why did you choose particularly to work with 12-year trend lines?

    Before you answer, let’s try a little thought experiment. I have calculated the trend lines for each of the temp records from 2006 to August 2010. The trends for the satellite records are nearly 3 times greater than for the surface records (positive slope). I could conclude that global warming has returned with a vengeance, and I could point to the satellite data to indicate the severity of the change. Or, I could say there must be something fishy about the satellite records, because they are double even the IPCC trend (given as 0.2C/dec over 20 years).

    Is the time period I’m using illegitimate? Or are my observations well-grounded?

    (2006 and 2010 show the same declining trend in el Nino oscillation throughout the year – I’m measuring ‘peak to peak’)

    • Amino says:

      You need to read my comments again. I already explained why I used 12 years.

    • Amino says:

      I made it quite clear in the video that it was not about trends but about the differences between the sets.

      I left that explanation at the end for 25 seconds so your very claim about current trends could not be made. How did you miss that?

      Maybe you didn’t watch the video until the end.

  7. Amino says:

    (2006 and 2010 show the same declining trend in el Nino oscillation throughout the year – I’m measuring ‘peak to peak’)

    Peak to peak has nothing to do with the point of the video.

  8. barry says:

    My question was “why did you choose to work with 12 year trends?” Your answer was that you used lots of them so as not to cherry-pick. That doesn’t answer my question.

    Why did you choose 12-year trends instead of any other time period?

    I ask because 12-year global temperature trends fail the statistical significance test. You need to work with at least 17 years of data to achieve a 95% confidence level; 20 years is better. For the satellite records, which have more variability, 20 years is the absolute minimum, but longer periods are better for statistical significance (a rough way to check this for yourself is sketched at the end of this comment).

    Do you know what ‘statistical significance’ is?

    I made it quite clear in the video that it was not about trends but about the differences between the sets.

    Yes, and I understood that (I watched the whole video and read all the comments).

    Would it be any worse or better if I decided to work with 4, 10, 20, or 25-year trends? If I worked with 4-year trends over the last 30 years, the satellite data would often be way out of line with (in many cases much higher than) the surface temperature records. I showed this with the graph I linked of the trend from 2006 to present. The satellite records are both around 2 to 3 times greater than the surface records. It’s the opposite of your conclusion, so I ask you – is a 12-year trend a better metric than a 4.5-year trend to establish differences between the data sets, or does it not matter at all?

    Why did you select 12 year time periods to make comparisons as opposed to any other? Is there a scientific basis for doing that?
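
    Here, as promised, is a rough way to check the significance point, again assuming a hypothetical CSV of monthly anomalies with one column per record. Note that a plain OLS p-value treats every month as independent, so it is, if anything, generous to short windows:

        import pandas as pd
        from scipy import stats

        df = pd.read_csv("monthly_anomalies.csv", index_col="date", parse_dates=True)

        def trend_and_pvalue(series, start, end):
            s = series.loc[start:end].dropna()
            years = s.index.year + (s.index.month - 0.5) / 12.0
            fit = stats.linregress(years, s.values)
            return fit.slope * 10.0, fit.pvalue             # C/decade, two-sided p-value

        for start, end in [("1998-01", "2009-12"), ("1990-01", "2009-12")]:
            slope, p = trend_and_pvalue(df["GISS"], start, end)
            print(f"{start} to {end}: {slope:+.3f} C/decade, p = {p:.3f}")

    If the 95% confidence interval on the slope (roughly slope ± 2 × stderr) straddles zero, the trend cannot be distinguished from no trend at all.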

  9. barry says:

    Are you saying the date was specifically chosen to emphasise the divergence?

    The reason I find the particular choice of 12-year trend lines fishy is that it is exactly the period required to show the greatest possible (GISS) divergence to present. Use any other period and GISS doesn’t diverge so much to present, and if you use shorter ones (like the trend since 2006), other data sets show higher trends and greater divergence – even with peak matching (I know you know what I mean here, Steve, even if Amino doesn’t).

    As the video shows, GISS is not always the highest using 12 year trends over the satellite record, but selecting 12 years – a period that fails statistical significance – is precisely what’s required to get the big divergence to present.

    It’s no mystery that GISS shows a higher trend for recent years. Their anomalies have been higher over the last few years than the other data sets (relative to baseline). Whether that’s valid or not is another matter.

    Steve, you must be familiar with the concept of statistical significance. Do you not think it worth qualifying the validity of Amino’s trend comparisons with that metric? According to the scientific standard (across all sciences, not just climate), the data samples used here fail the statistical test that says they are confidently distinguishable from zero trends. Linear trends are approximations anyway; they do not display the real data. Even with a bare minimum of statistical significance (17 years for surface data in this case, more for the satellite records), all we could say, with 95% confidence, is that the trends are upwards. With 25 years of surface data, we could be more confident about the actual amplitude. Using that time period, starting in 1979 and shifting forward a year at a time to compare, there is far less divergence, and GISS would still top out towards the end. But at least we’d be working with time periods that better reflected actual trends.

    Amino, in case you’re unfamiliar with statistical significance, here’s a fair primer.

    In statistics, a result is called statistically significant if it is unlikely to have occurred by chance….

    The amount of evidence required to accept that an event is unlikely to have arisen by chance is known as the significance level or critical p-value: in traditional Fisherian statistical hypothesis testing, the p-value is the probability of observing data at least as extreme as that observed, given that the null hypothesis is true. If the obtained p-value is small then it can be said either the null hypothesis is false or an unusual event has occurred.

    http://en.wikipedia.org/wiki/Statistical_significance

    and

    In statistical significance testing, the p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true. A closely related concept is the E-value, which is the average number of times in multiple testing that one expects to obtain a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true. When the tests are statistically independent the E-value is the product of the number of tests and the p-value.

    The lower the p-value, the less likely the result is if the null hypothesis is true, and consequently the more “significant” the result is, in the sense of statistical significance. One often accepts the alternative hypothesis, (i.e. rejects a null hypothesis) if the p-value is less than 0.05 or 0.01, corresponding respectively to a 5% or 1% chance of rejecting the null hypothesis when it is true.

    http://en.wikipedia.org/wiki/P_value

    Significance testing is fundamental to trend analysis. In the case above, all the 12-year periods fail to achieve statistical significance. This, as I mentioned before, is the basis for Phil Jones’ comment about 1995 – 2009. The trend was positive, but he noted that the time period was too short to achieve statistical significance (only one more year was needed to reach the 95% confidence level, as Lubos Motl pointed out on his blog).

    Unless you test for significance, you can say very little about the trends you are working with. Yes, GISS diverges greatly at the end of the temp record using the 12 year time period, but these trends are too dubious to be considered reflective of anything.
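
    The 1995 – 2009 example is easy to reproduce in the same sketch form: hold the start year fixed and extend the end year, watching the p-value fall as the record lengthens (same hypothetical CSV as before, and the same caveat that plain OLS ignores autocorrelation, so the exact p-values should be taken loosely):

        import pandas as pd
        from scipy import stats

        df = pd.read_csv("monthly_anomalies.csv", index_col="date", parse_dates=True)
        series = df["HadCRUT"]                              # Jones's comment concerned the HadCRUT record

        for end_year in range(2005, 2011):
            s = series.loc["1995-01":f"{end_year}-12"].dropna()
            years = s.index.year + (s.index.month - 0.5) / 12.0
            fit = stats.linregress(years, s.values)
            print(f"1995-{end_year}: {fit.slope * 10:+.3f} C/decade, p = {fit.pvalue:.3f}")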

  10. Pingback: More On GISTEMP Differences (part 2) | Real Science
