Nick Stokes Busted : Part 5

Nick and Zeke's favorite justification for NOAA data tampering – which turns US cooling into warming – is "changing station composition," i.e. the set of USHCN stations isn't identical from year to year. In this post I examine that rationalization.

Rather than attempting to adjust the temperatures, let’s do a much more rigorous experiment – and simply use a set of stations which haven’t changed. There are 747 US stations which were continuously active over the past century. Examining them, they show exactly the same pattern as the set of all stations.

Using the set of 747 unchanging stations, maximum temperatures have declined over the past century – just like they do in the set of all stations.
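
A minimal sketch (not the post's actual code) of the fixed-station experiment, in Python/pandas. It assumes a DataFrame `daily` of raw USHCN daily data with hypothetical columns `station`, `date` (datetime) and `tmax` (deg F):

```python
import pandas as pd

def fixed_station_tmax(daily: pd.DataFrame, start: int = 1918, end: int = 2017) -> pd.Series:
    d = daily.copy()
    d["year"] = d["date"].dt.year
    d = d[d["year"].between(start, end)]

    # Keep only stations that report in every year of the period,
    # so the station composition never changes.
    years = d.groupby("station")["year"].nunique()
    continuous = years[years == end - start + 1].index
    fixed = d[d["station"].isin(continuous)]

    # Simple unweighted mean of TMAX across the fixed set, by year.
    return fixed.groupby("year")["tmax"].mean()
```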

Here is the equivalent NOAA “adjusted” graph for all stations. NOAA has turned cooling into warming via data tampering.

Climate at a Glance | National Centers for Environmental Information (NCEI)

Looking at the set of 747 unchanging stations, the first hot day of summer is coming later.

The last hot day of summer is coming earlier.

The frequency of hot days is declining.
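
The three summer metrics above can be sketched from the same hypothetical `daily` DataFrame. The post's exact hot-day threshold isn't stated here, so 90 deg F is a placeholder:

```python
import pandas as pd

def hot_day_metrics(daily: pd.DataFrame, threshold: float = 90.0) -> pd.DataFrame:
    d = daily.copy()
    d["year"] = d["date"].dt.year
    d["doy"] = d["date"].dt.dayofyear
    hot = d[d["tmax"] >= threshold]
    return hot.groupby("year").agg(
        first_hot=("doy", "min"),  # rising => first hot day coming later
        last_hot=("doy", "max"),   # falling => last hot day coming earlier
        hot_days=("doy", "size"),  # falling => fewer hot days per year
    )
```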

We can already see that the adjustments are garbage. But let's proceed.

Now let’s try using a set of monthly data which is randomly chosen from month to month, so the station composition is changing dramatically every month. This experiment shows the exact same patterns as the set of all stations, indicating that the USHCN raw data is very robust, and changing station composition has little impact on patterns.
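
A sketch of this changing-composition experiment, again with hypothetical column names (`month` as a pandas Period, `station`, `tavg`):

```python
import numpy as np
import pandas as pd

def random_composition_series(monthly: pd.DataFrame, frac: float = 0.5, seed: int = 0) -> pd.Series:
    rng = np.random.default_rng(seed)
    out = {}
    # Each month, average a fresh random subset of stations, so the
    # station composition changes dramatically from month to month.
    for month, grp in monthly.groupby("month"):
        stations = grp["station"].unique()
        n = max(1, int(frac * len(stations)))
        pick = rng.choice(stations, size=n, replace=False)
        out[month] = grp.loc[grp["station"].isin(pick), "tavg"].mean()
    return pd.Series(out).sort_index()
```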

Now let’s try using only even numbered USHCN stations. Again, we see the same pattern as the set of all stations.

Now let’s do the same thing for odd numbered stations. Again, the same pattern.
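
A sketch of the even/odd split. The post doesn't say here how parity is assigned; taking the last digit of the USHCN station ID is one assumption:

```python
import pandas as pd

def parity_split(daily: pd.DataFrame):
    d = daily.copy()
    d["year"] = d["date"].dt.year
    last_digit = d["station"].astype(str).str[-1].astype(int)  # assumes IDs end in a digit
    even = d[last_digit % 2 == 0].groupby("year")["tmax"].mean()
    odd = d[last_digit % 2 == 1].groupby("year")["tmax"].mean()
    return even, odd  # two independent halves of the network
```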

Finally, let’s look at the same set of graphs for all USHCN stations. Again, exactly the same patterns.

In the past, Gavin Schmidt at NASA stated that we don't need very many US stations to make a robust temperature record – and he was correct. The USHCN stations were chosen precisely because they were robust.

Obviously there are deterministic ways we could force changing station composition to impact the trend (like intentionally removing southern states after the year 1970) – but the random changes to USHCN station composition over time have very little impact on the trend.  Nick and Zeke are simply using that as a smokescreen for NOAA to hide their data tampering fraud.


92 Responses to Nick Stokes Busted : Part 5

  1. Gerald Machnee says:

    747 – that is a familiar number!!

  2. Steve Case says:

    Twenty-five graphs. Is that the most for any one of Tony’s posts?

    • Colorado Wellington says:

      … they gonna do something or just stand there and bleed?

    • Tom says:

      Steve, what is your point?

      Each graph that I reviewed was pertinent and instructive. The use of even and odd numbered stations was a simple, easily understood, and independent random split of the data, and was quite clever. In a macro sense, it should adjust for hot (and/or cold) outliers in the homogeneous result. Don't you want to see them? Which graph do you think is extraneous and should be deleted, and why?

      If you have objections about the data, methodology, results or interpretation then we (I) are ready, willing and able to weigh what you say.

      • Steve Case says:

        Maybe I should have said, “WOW – Twenty-five graphs!” That represents a lot of work.

        I put up graphs from time to time and my production of those things isn't anywhere near what Tony does. I admire what he does. Some day he's going to be regarded as a National treasure.

        When I can, I check his stuff out – and it’s on the money. When it comes to the theory of Global Warming/Climate Change – he’s a one-man wrecking crew.

  3. jackson says:

    Interesting experiment.
    Nice work.

  4. kyle_fouro says:

    Isn't their method bogus regardless, because it requires so much "quality control" to begin with? I personally would like to see how many statisticians approve of how things are currently done.

  5. sunsettommy says:

    Nick Stokes suddenly vanished when I posted HARD evidence that USHCN is not obsolete; in fact it is still being fully updated DAILY for Minimum/Maximum temperature, Precipitation and T average.

    • kyle_fouro says:

      They just play climate science Bop It until someone happens to bring forth a hard fact (like you did) that pushes them into a corner.

      “Homogenize it, adjust it, obsolete it, compare it, etc”

    • angech says:

      sunsettommy says:
      October 7, 2017 at 5:24 pm
      Nick Stokes, suddenly vanished when I posted HARD evidence that USHCN is not obsolete.
      Only from sight.
      He is still out there in the ether reading your words and ready to go on a different subject.
      Thank you for straightening out the drivel.

    • richard says:

      If you follow the comments section in certain newspapers, in the environment section, you will find a few names that write exactly like our friend Nick – he is a busy man.

  6. Andy DC says:

    Even I know enough to realize that you need a surprisingly small sample size to discern a meaningful trend.

  7. GW Smith says:

    Great job, Tony! I love to see you destroy their most infamous charts.

  8. Adamant de-Nye-er says:

    Well done, and a good demonstration of statistical principles as well.

  9. TedL says:

    This is actually quite serious. What Tony has done is perform a forensic audit of the US data as presented on the "Climate at a glance" site using the underlying daily data which Tony asserts has escaped adjustment. Tony's results are dramatically different from what you get at Climate-at-a-glance. You are left with two possible conclusions – Tony's software generates bogus results or Climate-at-a-glance is wrong. Tony has made his code available to all and thus far nobody has pointed out any bugs that would reverse the long-term trend from that shown on Climate-at-a-glance. So all you climate alarmists get to work looking at Tony's code. I don't have the expertise to conduct such a review, but the deafening silence from the other side on this matter leads me to conclude that the fault will not be found in Tony's software. Thus it is more likely that Climate-at-a-glance is wrong, a conclusion that readers of this blog arrived at a long time ago. So the next interesting question is whether the Climate-at-a-glance records are simply erroneous, or are intended to perpetrate a fraud, the latter possibility being something long time readers of this blog have also likely concluded. Fraud is harder to prove, but given the ease with which a third party – Tony – was able to demonstrate that the Climate-at-a-glance records are wrong, it seems inescapable that the keepers of those records are, at a minimum, grossly negligent.

  10. angech says:

    Tony persists in using the data sets that USHCN says are real, when even the stations they claim are consistently active are not – some, a lot, are infilled as well.
    Zeke put these discrepancies down to death and retirement of operators at many of the more rural/difficult sites, and to equipment failure.
    At one stage in winter (5 years ago?) there were fewer than 700 valid reporting stations, so 747 cannot be right. Stations that are offline, that have been retired or lost their operator, are made up with data from neighbouring, often non-USHCN, sites to keep the "validity" of the original sites "intact", as otherwise USHCN cannot fulfill its claim to be an ongoing continuous recording network.
    A lot of the database originally used at one stage was not the full 1218/1219 stations but a subgroup of approx. 500 stations. Not sure if this is still the case.

    • RW says:

      Sounds like more dust as Tony might say.

      These graphs show, as Tony has essentially said, that there is more than enough redundancy in this Goliath of a data set to render the ‘issue’ of temporal inconsistency in a subset of stations a total non-issue.

      This is rudimentary statistics. The station locations literally dot the entirety of the U.S. You can ditch half the data set and still get virtually the same result.

      Next step is to trace the adjustments (or model them). We need to know beyond reasonable doubt whether or not the adjustments are nothing but data torture and data snooping.

      Maybe machine learning / multivariate pattern analysis could help.

      What is the data-driven answer to the question "what makes a station's data more or less likely to be removed or included, and if included, adjusted and by how much?" And does the data-driven answer match the answer provided by NOAA?

      • rw says:

        Some good ideas here. Reminds me of some sleuthing done on the Soal-Shackleton ESP data years ago, showing how Soal had fixed the results by changing Shackleton's 1-guesses into 5's when the target was a 5. (Earlier, it was shown that the pattern of results failed some statistical tests, which didn't prove the data were corrupt but allowed for that possibility – but I can't remember the details.) It won't be as easy to establish anything in this case, but techniques like the ones you suggest might come up with something.

        From a psychological point of view, my guess is that they avoid direct alterations, relying on automatic procedures suitably applied. That way they can among other things kid themselves about what they’re doing.

        In general I like the way that Tony uses data. Earlier, I was quite taken by the use of frequency of days > k degrees (which of course should show the same trends as the aggregated (aka “average”) temperature).

  11. Ben Vorlich says:

    I'm curious, has anyone actually identified from the data when something changed at these stations? For example, there must have been more than one operator taking the readings, and it is more than likely that equipment was changed. I'd expect that, if there was a serious problem, it would be detectable when compared to neighbouring stations. If such an event has been identified, has it been traced back to see what changed?

    That’s what I’d do if I was checking the validity of the data, not assume that the readings were taken by incompetents who aren’t as good at taking measurements as I am.

  12. d says:

    If the raw data shows cooling, yet 42% of NOAA stations were sited "hot" and have now been adjusted still hotter, how can the true temperatures be known?

    https://www.gao.gov/mobile/products/GAO-11-800

    D

  13. Gary_STC says:

    Great work by Tony as usual. The type of supporting data you would expect someone to present to prove their position. Why haven't we seen it from the AGW folks?
    The data presented by Tony is fully supported by the raw data from our local station. Here in Minnesota the counter by the AGW folks is "yeah, but our winters are warmer, shorter and not as cold, and that proves climate change". Since 5 of the 12 warmest winters here have occurred since 2000, on the surface it's kind of hard to argue against that position. I have not developed, nor am I qualified to develop, an irrefutable statistical counter such as Tony has done. I have taken an average of the annual mean temps for the years 1900-16 and 2000-16 and the difference is only 0.6°F. This would seem to say that any warmer cold-season temps are being offset by cooler warm-season temps? Tony? Anyone?

    • menicholas says:

      It is well known among climate realists that what is happening is that winters and nighttimes are becoming less cold, and summers and daytimes less hot.
      This is superimposed on any other trends which have occurred.
      At least part of this is likely land use changes… more buildings and pavement, fewer trees and forests, etc.

  14. Rob says:

    All I see is that temperatures are getting back to where they were in that early 20th century period. There is no long term warming or cooling as it goes up and down.

    I don’t agree at all with saying there is cooling because if there was cooling then the mean line wouldn’t be higher than it was some decades ago. It’s obvious right now there is some warming but it followed a period of cooling. Just like people shouldn’t say there has been continual century long warming, people shouldn’t say there is still cooling because of what happened in the 60s/70s.

    The whole climate debate stuff is just so silly. I don’t know why it is so hard for everyone to just say things fluctuate and there have and will be periods of warming and periods of cooling as the data clearly shows. Right now we are in a period of warming (though not as drastic as the climate fraudsters try and claim it is) but it’s not like it has been warming for 100+ years and history shows there is no certainty it is going to continue.

  15. richard verney says:

    Tony

    This is really good.

    How about following this up with a further sub-set, namely using just the CRN1-category sited stations from the 747 stations? No doubt it is relatively easy to identify the CRN1 stations from Anthony's surface station project.

    This should help remove problems with urbanisation.

    I would then suggest 2 further subsets, based upon TOB: those that have TOB in the morning, and those that have TOB in the afternoon.

    One needs to eliminate all excuses for adjustments, so that the RAW data is the most appropriate metric rather than some form of adjusted corruption of the RAW data.


  16. CheshireRed says:

    I wonder how many times Nick has scrolled through and read this post and btl comments. He must be quietly seething. C’mon Nick, man up. Either engage or if TH is right, acknowledge as such.

    • sunsettommy says:

      He told me he has looked at the links to Tony’s post about Nick being exposed.

      My last post at WUWT destroyed Nick, who suddenly left the subject. I had convincingly exposed him as a liar, since he kept saying USHCN was obsolete, when Tony had stated several times that it was actually being updated DAILY. Nick kept saying it was obsolete anyway, which is a lie!!

      I posted the link to a file showing that the USHCN database was being updated daily for Precipitation, max/min temperature and T average.

      That is when he shut up and ran away.

      • richard verney says:

        Nick's point that USHCN had become obsolete in 2012 or 2014 was always a bad point, since what Tony was showing was that the US was warmer in the past (especially around the 1930s) and has cooled since then. It makes no significant difference to the point that temperatures have been cooling whether the cooling runs through to 2012, to 2014, or to 2016.

        Also the fact that the data set became obsolete in 2012 (or whenever) would make no difference to the fact that the historic data is constantly being changed.

        In short, there was never anything of substance in the point that Nick was making, even had that point been true (which it is not).

        But Nick is right that there is a problem with the change in the composition of the stations over time. However, this is something that pervades all the time series temperature reconstructions (NOAA, NASA/GISS, and HadCru) and renders the time series reconstructions meaningless. The way these series are compiled, we can never know whether it is warmer today than it was in the late 1930s/1940s, or for that matter in around 1880, simply because the stations that make up the 2016 temperature anomaly are not the same stations that made up the 1940s temperature anomaly, which in turn are not the same stations that made up the 1880 anomaly.

        At no point in time are we comparing like with like. We need to make a comparison with point-by-point measurements, not some homogenized, infilled, kriged, spatially adjusted global construct.

    • Rah says:

      We no that he won’t acknowledge his errors/deceptions or directly engage.

    • Colorado Wellington says:

      They remind us daily that it’s not about science, don’t they?

  17. Rah says:

    Know not no. Damn this phone.

  18. Cheeseburger McFreedomman says:

    WOW this article opened my eyes to the secretive ways of all the reptilians of NASA and their underlings such as Nick!
    Thanks brother! We should watch the next wrestlemania together whilst circlejerking over aliens and their connections to God!
    Did you know that Obama did 9/11? I’m sure you did, brother, it’s so obvious!
    Also last night my dog ate my lube so I’m out for a few weeks, so we’ll have to reschedule this circle jerk to a later date
    I love you babycakes
    8========D

  19. The apple does not fall very far from the tree. Chelsea Clinton is a chip off the old block so it seems likely that she will become a “Leader” in the Democrat party while it spends the next 40 years in the wilderness looking for an idea that will benefit the American people.

    Of course I am assuming that the Republican party will wake up and get behind Donald Trump and his agenda. With Mitch and Paul in charge the GOP may once again snatch defeat from the jaws of victory by betraying their voters yet again.

  20. David M. says:

    Devastating critique of Nick & Zeke. Very understandable. Comprehensive. Methodologically valid. Many thanks.

    Following is a suggestion for another analysis. The suggested analysis should expose a serious flaw in official temperature adjustments, and enhance the multi-part critique of Nick Stokes, et al. Form a set of temperature stations whose readings have been influenced over time by urban heat island effects. Form a second set where the surroundings today closely resemble the surroundings 50+ years ago. The latter are NOT influenced by changing urban heat island effects. Tony already has formed these two sets, if memory serves correctly.

    Differences between representative values for one set and comparable values for the other set should approximate an appropriate adjustment for the urban heat island effect. For the sake of clarity, representative values could be the means for each set, the median…My expectation is the difference will be OPPOSITE the “Mann-made” distortions of the official temperature estimates.

    Keep up the great work, please.

  21. DR says:

    sunsettommy,
    where is the link to your exposé of Nick Stokes at WUWT? I've always found him to be very disingenuous at the least. Anthony, the gentleman that he is, avoided saying what we all know Nick is.

    If anyone has the link to tommy’s post please post it.

    • RAH says:

      Anthony told Griff where to get off in a most direct way.

      Griff October 6, 2017 at 12:54 am
      This has no connection with climate or climate science and I am dismayed to see it on this site.
      [Ed, See here’s the thing, and there’s really no way of getting around this – I don’t care what you think. When you get your own site, you can run it as you see fit. In the meantime, tough noogies. – Anthony Watts]

  22. David A says:

    “disingenuous” is a very apt description of what Nick S does every time logic pins him into a corner.

  23. Frank says:

    Several people have recommended I look into your latest work.

    Tony wrote: “Rather than attempting to adjust the temperatures, let’s do a much more rigorous experiment – and simply use a set of stations which haven’t changed. There are 747 US stations which were continuously active over the past century. Examining them, they show exactly the same pattern as the set of all stations.”

    What if your 747 US stations are concentrated in the East and especially the Southeast? Hansen 1999 (Plate A2) showed slight cooling in the Southeast and warming in the Northwest. If there are many stations per unit area in the regions of cooling and fewer stations per unit area in regions of warming, your simple average will be biased towards cooling. Unlike you, all other groups weight their average anomalies for a given area by the area when constructing a temperature record.

    When you get a different answer, it doesn’t mean they cheated or corrupted the data. Hansen published a half dozen papers explaining how and WHY the GISS analysis has changed. If you have a superior way of analyzing the data, that could be worth publishing. Is weighting all stations equally superior?

    • tonyheller says:

      Nearly 50% of current USHCN adjusted data is fabricated.

      • Frank says:

        Tony: Since stations are not evenly spaced around the globe or the US, serious analysts calculate an average temperature anomaly for grid cells. Their temperatures are weighted by area. Your simple averaging, which is not weighted by area, could easily produce a trend that is different from theirs. Therefore, the analysis in this post alone provides no justification for allegations of fabrication. (I read a few earlier posts in this series, but don’t claim any mastery of the larger problem – just the faulty argument in this post.)

        To make a crude, but simple, analogy, one group can calculate a mean and a second group a median. These measures of central tendency can differ without either group having done anything wrong or inappropriate. When the scatter in the data is asymmetrically distributed (as with ECS), the median is usually regarded as the superior metric. In that case, it is perfectly sensible for those reporting the median to claim that their result is the best answer, even though neither answer is mathematically incorrect.

        Given the non-uniformity of the spacing of stations, simple averaging certainly appears to be an inferior approach to the problem.
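
For concreteness, a minimal sketch of the gridding approach Frank describes (not NOAA's actual code): average stations within each lat/lon cell, then weight cells by cos(latitude) as an equal-area proxy. The DataFrame columns (`lat`, `lon`, `anom`) are hypothetical:

```python
import numpy as np
import pandas as pd

def gridded_mean(stations: pd.DataFrame, cell: float = 2.5) -> float:
    s = stations.copy()
    s["glat"] = (s["lat"] // cell) * cell   # snap to cell edges
    s["glon"] = (s["lon"] // cell) * cell
    cells = s.groupby(["glat", "glon"])["anom"].mean().reset_index()
    w = np.cos(np.deg2rad(cells["glat"] + cell / 2))  # cell-centre latitude
    return float((cells["anom"] * w).sum() / w.sum())
```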

        • spike55 says:

          As you say, surface stations are sparse, irregularly spaced, biased to urban areas, and HIGHLY dubious in quality.

          They are totally UNFIT for the calculation of any sort of global anything …….. except propaganda.

          If you want something that is consistent and evenly spaced, and unbiased by urban warming, use the UAH temperature data.

          It shows NO WARMING in 40 years apart from the 1998 El Nino step and the 2015/16 El Nino transient

          No warming from 1980-1997

        • spike55 says:

          And No warming from 2001-2015

          Note, I have used land data.
          Whole Planet data says exactly the same thing.

          • Frank says:

            Spike: Very clever cherry-picking of starting and ending dates. Using the CURRENT DATA, UAH6, your pre-El Nino trend is 0.76 K/century and 0.19 K/century for your post-El Nino period.

            Of course, you have skipped the two El Ninos in the satellite record. For the whole UAH record, warming is about 1.4 K/century. The warming from an El Nino isn't permanent. The 97/98 El Nino saw almost 1.0 K of warming in UAH from 3/97 to 1/98 and 1.0 K of cooling back to 3/07. To a certain extent, an El Nino is a slowing of the normal upwelling of cold deep-ocean water off the West Coast of Equatorial South America and a slowing of the subsidence of warm water in the Western Pacific Warm Pool. This change in overturning is associated with a change in the prevailing wind, and leaves the Eastern Pacific much warmer. That warmth travels around much of the globe. Once overturning returns to normal, so do SSTs and global temperatures.

            It doesn’t take long for the temperature of the atmosphere to respond to a change in radiation. In the interior of continents, the warmest day of the year typically lags the longest day (ie most irradiation) by a month. Things are somewhat slower along the coast and over the ocean. So the heat left over by an El Nino can be lost from the atmosphere in 6-18 months as it did in 1998. The effect of the 15/16 El Nino ON THE ATMOSPHERE has also dissipated.

          • Gator says:

            I love it when gullible lefties project their own cherrypicking onto the rest of us. Classic case Frank, thanks for the illustration.

  24. Frank says:

    Tony wrote: “Rather than attempting to adjust the temperatures, let’s do a much more rigorous experiment – and simply use a set of stations which haven’t changed. There are 747 US stations which were continuously active over the past century. Examining them, they show exactly the same pattern as the set of all stations.”

    Unfortunately, continuously active doesn't mean that each station has data for 1200 months. How many readings are there for any given month? 700? 600? 500?

    Let’s hypothesize that stations in the colder part of the country were less likely in the past than in the present to collect the minimum number of daily readings needed to produce a monthly reading when it was really cold. Or, if you are working with daily temperatures, that they missed more cold days in the past than they do now. IIRC many stations today have electronic data recorders. The owner doesn’t have to slog through 2 feet of snow during a blizzard to record the temperature. If fewer winter readings were taken in the past, that will artificially warm the past compared to the present.

    How many stations would need to fail to collect data during the coldest periods to create a serious problem? Let’s assume that the average winter reading is 50 degF for simplicity, which is probably too high. Let’s hypothesize that the average station that missed reporting data during the coldest periods in the past was 0 degF for simplicity. If 1% of those stations didn’t report (perhaps 5 to 7 stations), the average of all reporting stations will increase to 50.5 degF. If 2% of those stations didn’t report, then 51 degF. Now this bias won’t be present all year long, so the annual average won’t rise from 65 to 66 degF. But the impact will not be trivial.
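
A toy numeric illustration of the arithmetic in the paragraph above (numbers invented):

```python
import numpy as np

temps = np.full(700, 50.0)   # 700 stations reading 50 deg F
temps[:7] = 0.0              # 1% of them sit at 0 deg F
print(temps.mean())          # 49.5 - the average when every station reports
print(temps[7:].mean())      # 50.0 - the cold 1% fail to report: +0.5 bias
```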

    Others are analyzing the same data by identifying an average temperature anomaly to grid cells spread evenly across the country. If one of five or ten stations drops out during cold periods, they have four or nine nearby stations experiencing similar weather or climate to compensate. So the absence of a small amount of data doesn’t bias their results nearly as much as a simple average does.

    Unless you know which stations are reporting at what time in the year, simple averaging could produce a biased trend.

    • tonyheller says:

      Electronic data recorders in the 1930s?

      • Frank says:

        Tony, let's not be absurd. Electronic data recorders enable station operators to record data during extremely cold weather that earlier operators might not have collected by hand. Simple averages can be easily distorted by the absence of extreme data.

        Use some statistics package to create 10,000 values with a given mean and standard deviation. Discard all or half of the data points more than 2 standard deviations below the mean, but none above the mean. What happens to the mean? It changes substantially, even though only a few data points have been lost.
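
A sketch of that experiment in Python (any statistics package would do):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=50.0, scale=10.0, size=10_000)
kept = x[x >= x.mean() - 2 * x.std()]  # drop only the low tail
print(x.mean(), kept.mean(), x.size - kept.size)
# ~2.3% of points fall below -2 SD; discarding them lifts the mean
# by roughly 0.55 (about 0.06 SD) despite losing only ~230 points.
```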

        Since others combine the readings of multiple stations to produce an anomaly for a given grid cell, the absence of data from one station because of extreme cold doesn't interfere with the others accurately reporting an extremely cold cell.

        The limitations of simple averaging mean you can't use it to prove that more sophisticated analyses are wrong. Do you have any more reliable rationale?

        • Gator says:

          More sophisticated analyses

          Yeah Tony, we need more sophistry.

          • Frank says:

            Gator: With many more land stations in the NH than the SH, does it make sense to compile a global temperature record by averaging the raw output of every station every month? A global record created by simple averaging would rise about 20 or 30 degF during summer in the NH and fall in the winter. We'd never be able to see the slow 0.2 degF of warming in recent decades in a record containing so much seasonal warming and cooling.

            Thus we need a more sophisticated approach: temperature anomalies. How much warmer was this October compared with the average October in the past? And we can weight that answer by area, so the closely packed US stations don’t dominate a “global” record.
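
A minimal sketch of the anomaly method described above, with hypothetical column names (`station`, `year`, `month`, `tavg`):

```python
import pandas as pd

def monthly_anomalies(monthly: pd.DataFrame, base=(1951, 1980)) -> pd.DataFrame:
    m = monthly.copy()
    # Each station's own average for each calendar month over a base period.
    in_base = m["year"].between(*base)
    clim = m[in_base].groupby(["station", "month"])["tavg"].mean().rename("clim")
    m = m.join(clim, on=["station", "month"])
    # Subtracting the climatology removes the seasonal cycle before averaging.
    m["anom"] = m["tavg"] - m["clim"]
    return m
```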

            If my words are sophistry, please provide a sophisticated argument explaining why. I might learn something. Ad hominems and strawmen aren't sophisticated.

          • Gator says:

            Don’t get me started… too late…

            Using “anomalies” to study an insignificant blip of time on Earth, and using this incredibly small set of numbers to understand an almost incomprehensible reality, is simply nonsense.

            a·nom·a·ly (əˈnäməlē), noun
            1. something that deviates from what is standard, normal, or expected.

            1- There is no such thing as “normal” in climate or weather.

            2- What exactly am I supposed to expect in the future, based upon the range of possibilities we see in the geologic record? Are the changes we see happening “extreme” in any way?

            3- No.

            Anomalies are created by the definers of “normal”.

    • Gator says:

      What if 5% of hot stations refused to record temp data because it was too hot?

      Instead of second-guessing the actions of humans, let's stick to the data.

      • Frank says:

        I wan’t guessing the actions of people. I offered two alternative explanations for why Tony’s trend from simple averages could be different from NOAA’s. Until Tony can prove that these aren’t the source of the discrepancy, there is no need to accuse them of scientific fraud.

        Of course, it is politically popular today to level accusations of “fake news” to any information one doesn’t like. However, IMO the problem with climate science is too much politicization, not too little.

        • Gator says:

          I wan’t (sic) guessing the actions of people.

          Huh?

          Let’s hypothesize that stations in the colder part of the country were less likely in the past than in the present to collect the minimum number of daily readings needed to produce a monthly reading when it was really cold. Or, if you are working with daily temperatures, that they missed more cold days in the past than they do now. IIRC many stations today have electronic data recorders. The owner doesn’t have to slog through 2 feet of snow during a blizzard to record the temperature. If fewer winter readings were taken in the past, that will artificially warm the past compared to the present.

          That is exactly what you did.

          Frank, I was a climatology student after spending six years as a geology student who was drawn to the subject by ice ages. I have spent 4 decades closely studying our climate both past and present. Sophistry is what defines post modern climatology, something about which you seem blissfully unaware.

          A global average temperature and temperature anomalies are just 2 examples of #FakeClimate.

          • Frank says:

            Gator: The objective of my comments here was simply to provide SEVERAL alternative hypotheses that could explain why trends based on Tony’s simple average COULD disagree with the trends obtained by NOAA and other groups. The only hypothesis Tony believes is viable is fraud or scientific misconduct, an unlikely scenario given that multiple groups have reported similar warming trends, including BEST (a group of skeptics with funding from the Koch brothers).

            I'm glad to hear that I am conversing with a scientist who understands this subject. I have a PhD in chemistry and AGW has been my hobby for more than a decade. (As it turns out, gross nonsense about the importance of correlation and causation between CO2 and temperature in ice cores in David Archer's review of AIT at Real Climate prompted me to start reading someone Archer obviously disdained, a guy named McIntyre, a few years before Climategate. When a group of influential climate scientists provided a forum for Archer's nonsense, this contrarian wanted to know what McIntyre had to say. When FOIA and Climategate finally liberated most of CRU's data, a bunch of commenters (Zeke, Roman, Nick Stokes) all independently analyzed that data by different methods and concluded the consensus had arrived at a reasonable answer.)

            So when Tony Heller comes along with a trend from simple averaging of all station readings, I know better than to pay attention to such nonsense (and so should you). The only question is whether the information I can fit into a comment provides a convincing refutation. The whole problem (TOB, UHI, breakpoints, changing coverage, homogenization) is far too complicated to deal with convincingly, but it should be possible to dismiss simple averaging as absurd for this data. Based on comments at other blogs, it is possible Tony has uncovered things I don't know about. Hopefully I will be directed towards other posts that are more valuable and convincing.

            I think I have a decent idea about the meaning of global average temperature and what it does and doesn't tell us about how big the radiative imbalance at the TOA is. Chaos limits what we can conclude from temperature data. Radiative forcing, radiative imbalance and climate sensitivity (or more accurately, the climate feedback parameter) are my main focus.

            When you take a derivative to obtain a trend, constant terms drop out. So I don’t see anything wrong with the use of temperature anomalies when evaluating warming. To some extent, global warming is the sum of local warming rates, to a first approximation weighted by area. (A second approximation would involve the number of people involved and agriculture impacted.)
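
Written schematically (notation is mine, not from any paper under discussion), the point about constant terms is:

```latex
T_s(t) = b_s + w(t) + \varepsilon_s(t), \qquad
A_s(t) \equiv T_s(t) - \bar{T}_s = w(t) - \bar{w} + \tilde{\varepsilon}_s(t),
\qquad \frac{dA_s}{dt} = \frac{dT_s}{dt}
```

where b_s is station s's fixed local baseline, w(t) the shared signal, and \bar{T}_s the station's base-period mean: the station constant cancels, so anomalies change the level but not the trend.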

          • tonyheller says:

            BEST are definitely not skeptics. Muller, Zach and Mosher are three of the worst climate frauds out there.

            https://realclimatescience.com/2017/12/six-months-since-richard-muller-converted-president-trump/

          • Gator says:

            a bunch of commenters (Zeke, Roman, Nick Stokes) all independently analyzed that data by different methods and concluded the consensus had arrived at a reasonable answer.

            So Frank must also believe that all Baptists come to the same conclusion based upon independent studies. Interesting.

            Frank, you can get any answer you want, if you torture the data until it confesses. And that is exactly what grantologists and alarmists have done.

            I will side with actual data.

          • spike55 says:

            I once asked Nick to verify the QUALITY of six of the climate stations contributing to the WHOLE of Africa's climate. He was totally incapable of finding even one of them. Quite pathetic, really.

            I found 2 of them, one on a roof in the middle of an expanding city, about 3m from a flue.

            The other was right next to an airport runway.

            Is that the sort of data YOU would have accepted in your PhD? And would you then smear the temperatures recorded by them across millions of square kilometres and pretend it had any meaning?

            Sorry, but the surface data is a meaningless load of CRAP. Its only purpose is that it allows the climate glitterati to fabricate a load of continual propaganda.

            So sad you actually believe Muller was actually a sceptic, rather than a con-man. Even conned Koch for years. His daughter was always a rabid CO2-hater. Never asked who their major "anonymous" supporter is?

          • Frank says:

            Tony: As you know, the people at BEST were skeptics when they started. Based on the cherry-picking, incorrect analyses, withholding of raw data, and other problems that pervaded much work on reconstructing temperature from proxies, they wanted to take a totally independent look at land surface temperature. McIntyre had already found a few mistakes there. The longest records were from cities and airports, where UHI was a potential problem, so they included many shorter records from rural stations and did an analysis based only on rural stations.

            The big flaw in homogenization and BEST's approach is that no one knows what causes breakpoints in the data, aside from a few documented station moves (and TOB in the US). Moving a station from an increasingly urbanized location to a nearby park RESTORES earlier measurement conditions. Correcting the resulting breakpoint re-introduces a bias that the move had eliminated.

            When I look at BEST's final output for individual stations, with records broken up into a dozen individual segments aligned onto one "regional expectation", it is obvious that those dozen segments could be aligned almost as well onto a "regional expectation" with half the trend, twice the trend, or no trend. Producing a regional expectation by kriging presumably appropriately averages the trend in all the nearby segments. However, why should I trust the average of all of these trends (slopes) when the y-intercepts have so many problems? Without even a decent hypothesis for the cause of so many breakpoints, it isn't obvious to me that the problem isn't a bias in the trend being corrected by maintenance. IMO, breakpoint correction should be included in the confidence interval for the trend, not assumed to be systematically correct.

          • Gator says:

            Frank, the fact that you fell for the BS about BEST being a skeptic-led endeavor tells us all we need to know about you. You are a gullible know-nothing.

  25. Frank says:

    The potential drawbacks of averaging all stations become apparent if you contemplate constructing a global temperature record in this manner. The vast majority of land stations are in the Northern Hemisphere. A global temperature obtained by simply averaging all stations will rise about 30 degC every summer in the NH and fall during winter.

    Since averaging all stations would be insane for a global record, climate scientists are going to analyze the global record and the US record by similar methods which don’t have the potential biases of simple averaging. The fact that they see different trends than you do can be attributed to different methods of analysis, not fraud. The appropriate question is: Which method is most appropriate? Simplicity and transparency have their merits – but they also come with liabilities.

    • Gator says:

      methods which don’t have the potential biases of simple averaging

      So you prefer other biases? Sounds fishy to me when anyone chooses to throw out one bias for another.

    • spike55 says:

      Yes, using a whole heap of URBAN and AIRPORT records to fabricate a "global average" certainly would be incredibly STUPID.

      They probably apply to some 1% or less of the land surface.

      There is absolutely ZERO signature of warming by atmospheric CO2 in the only data worth even a pinch, the satellite temperature series of UAH.

      • Frank says:

        Spike55 writes: “There is absolutely ZERO signature of warming by atmospheric CO2 in the only data worth even a pinch, the satellite temperature series of UAH.”

        Your info is a little out of date there, Spike. The trend for UAH6 (lower troposphere) is +0.13 K/decade since 1979, with a 95% confidence interval of +/- 0.04 K/decade. RSS now gets 0.20 K/decade from the same data. The problem is that recently discovered orbital drift means that the satellites haven't been sensing temperature over the same location at the same time on every orbit. UAH chose to correct their output so it agreed with the results from radiosondes – meaning that their answer is no more reliable than radiosondes. If you think it is hard to measure temperature accurately 2 meters above the ground behind a Stevenson screen at a fixed location, consider the challenges of doing so with a radiosonde in full sunlight rising through an atmosphere whose temperature drops 0.65 degC with every 100 meters of rise.

        Or is “the only data worth considering”, the data that agrees with your preconceptions?

        • spike55 says:

          YOU IDIOT.

          I tell you that the ONLY warming came from El Nino events.. and then you use those events to calculate the only warming

          Are you REALLY THAT DUMB !!!

          The surface data is a LOAD OF ERROR RIDDEN fabricated and infill CRAP.

          End of story.

          RSSv4 is yet another farce, and actually uses "climate models" to justify the use of highly dubious ocean data.

          Why are you IGNORANT of these things, fronk?

          If you really can't see that, then your grasp of reality must be being influenced by either funding or magic mushrooms.

          • Frank says:

            Spike55 writes: “I tell you that the ONLY warming came from El Nino events.. and then you use those events to calculate the only warming.”

            El Nino events have nothing to do with the GW that is allegedly due to rising GHGs. Hopefully you will have the patience to understand why.

            Every year, 10-20 K of warming caused by seasonal changes in SWR comes to the NH in summer and escapes to space in winter. Heat in the atmosphere and mixed layer of the ocean doesn’t stick around for decades!

            The warmth from the 97/98 and 15/16 El Ninos doesn’t stick around either. It could be 0.4 K warmer today than in the 1990s because El Nino conditions are PERSISTING, but not because the heat El Nino released still remains in the climate system. If that were the case, winter would not follow summer! Absurd!

            El Ninos are characterized by unusual warmth in the Eastern Equatorial Pacific. That usually peaks and falls within a year, and during that time some of that heat is transmitted to the rest of the planet through the atmosphere. That is NOT happening today. El Ninos are not responsible for the fact that it is 0.9 K higher today than a half century ago. The average trend over half a century was +0.18 K/decade. That does include several six-month periods when the trend was about +5 K/decade quickly followed by -5 K/decade. The total change is still +0.9 K.

          • spike55 says:

            “El Nino events have nothing to do with the GW”

            Yet El Ninos are the cause of ALL the calculated warming trend in the last 40 years

            Stop DENYING the facts.

            Thanks for agreeing with me that there is absolutely NO CO2 WARMING SIGNAL in the whole of the satellite temperature data.

          • spike55 says:

            Without the El Ninos, the trend is ZERO.

            Been like that for 33 of the last 40 years.

            Why do you insist on using the El Ninos to calculate a trend, and then saying the trend is not because of them.. That’s just DUMB.

            There is absolutely no CO2 warming signature in the satellite record.

            There is absolutely ZERO evidence for CO2 warming anything, anywhere, anytime.

            We are still waiting for you to produce that evidence, instead of your continued brain-washed yapping.

        • spike55 says:

          Come on, fronk.

          Show us one piece of empirical science that shows atmospheric CO2 warming anything anywhere.

          Or do you just “believe” despite the total lack of evidence.

          I repeat, because you seem to be incapable of basic comprehension

          “There is absolutely ZERO signature of warming by atmospheric CO2 in the only data worth even a pinch, the satellite temperature series of UAH.”

          The only warming has come from El Nino ocean events which cannot possibly be caused by human anything.

          Even RSSv4 shows NO WARMING from 1980-1997.

        • spike55 says:

          And no warming from 2001-2015

          So ONLY warming is from the El Nino events

          NOTHING TO DO WITH ATMOSPHERIC CO2

          Or is that FACT incapable of breaking through your deep-seated brain-hosing!!

    • spike55 says:

      And sorry, but the addition of the un-justified TOBs fallacy IS FRAUD

      It accounts for basically ALL of the fabricated warming.. a massive huge FABRICATED adjustment.

      Like every other sort of warming apart from El Nino events, TOBs is based on manufactured statistical malpractice, and has been shown to have very little relationship to reality.

      • Frank says:

        spike55 wrote: ” the addition of the un-justified TOBs fallacy IS FRAUD”

        You wouldn't say so if you downloaded some hourly temperature data covering a few months, and asked EXCEL to identify the highest and lowest temperatures for the previous 24 hours every day at 5:00 pm and separately at 7:00 am. This simulates what happens when you read (at those times) the min/max thermometers that provided the bulk of our land temperature data. It took me about an hour to see why time-of-observation bias is important.
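
The same spreadsheet experiment can be sketched in Python. `hourly` is a hypothetical pandas Series of temperatures indexed by hourly timestamps:

```python
import pandas as pd

def tob_monthly_means(hourly: pd.Series, reset_hour: int) -> pd.Series:
    # Emulate reading and resetting a min/max thermometer once a day at
    # `reset_hour`: shift the index so each reset-to-reset 24 h window
    # falls on a single calendar date.
    s = hourly.copy()
    s.index = s.index - pd.Timedelta(hours=reset_hour)
    daily = s.resample("D").agg(["min", "max"])
    daily_mean = (daily["min"] + daily["max"]) / 2
    return daily_mean.resample("MS").mean()

# morning = tob_monthly_means(hourly, 7)    # 7:00 am observer
# evening = tob_monthly_means(hourly, 17)   # 5:00 pm observer
# (morning - evening) is the time-of-observation offset described above.
```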

        The NOAA paper addressing time of observation bias is one of the few climate science papers I think is excellent. They collected year-round hourly temperature data from a hundred sites around the country. They used half of the data to characterize the problem and devise correction algorithms. Then they tested the accuracy of their algorithms on the other half of the data to establish how well they worked. First-rate.

        On the other hand, homogenization of undocumented breakpoints is a dubious proposition, since no one knows why they occur. IMO, a breakpoint could represent a change to new measurement conditions (which should be corrected) or a gradually increasing bias (UHI, for example) that is eliminated by maintenance (and shouldn't be corrected, because the maintenance restored earlier observation conditions).

        • Gator says:

          We have seen TOBS analysis, like below, and found TOBS to be unnecessary and possibly even fraudulent.

          https://realclimatescience.com/?s=TOBS

        • spike55 says:

          Poor fronk…

          Sorry you are INCAPABLE of comprehending that TOBs DOES NOT require massive adjustments that account for basically ALL of the warming in the surface data.

          I can't help it if you are incapable of following basic scientific data, as Gator has shown TH to have done.

          There is NO ACCOUNTING for your ignorance and brain-washing.

          You just “believe” anything these charlatans tell you, don’t you, fronk.

        • spike55 says:

          TOBS adjustment is NOT necessary except for fabrication purposes.

          But you KNOW that don’t you, Fronk

          Zero difference in trends between morning and afternoon readings

          DATA and FACTS, fronk.

          Try it one day.

          And please stop trying to use NATURAL solar powered El Nino events to show CO2 warming.

          It makes you look like a complete brain-washed fool.

          • Frank says:

            Spike55 says: “TOBS adjustment is NOT necessary except for fabrication purposes.”

            Spike: Please find on the web some hourly temperature data from a period several months long from a single station. You pick the station. The min/max thermometers used in stations record the highest and lowest temperatures since they were last reset. Normally, the operator records the highest and lowest temperatures once every day, usually in the morning or evening, and then resets the thermometer. I assert that the monthly average temperature (the daily average of the high and low between readings) will be different if one uses 24-hour periods beginning at 7:00 am or 5:00 pm to identify the high and low for each day. It should be easy to prove I'm wrong. I'd like to say "Put up or shut up", but I am not the owner of this blog.

            (I’m not talking about data homogenization to eliminate breakpoints. I’m talking about the log book for a station saying in writing that the operator stopped recording temperature in the late afternoon and started recording in the morning.)

            Natural solar powered summers are followed by winters. I don’t see why the “natural solar powered El Nino” in 97/98 is still keeping the planet warm today.

            Of course, El Ninos are not really "powered" by the sun. To oversimplify, they are caused by a chaotic reduction in the upwelling of cold water from the deep ocean off the coast of Equatorial South America that normally keeps the Eastern Equatorial Pacific about 6 K cooler than the much cloudier and rainier Western Equatorial Pacific. The extra warmth in the Eastern Equatorial Pacific is transmitted to the rest of the planet via the atmosphere.

          • spike55 says:

            Energy for El Ninos comes from the SUN, like nearly all energy on the planet
            NOTHING to do with atmospheric CO2

            TOBs is PROVEN to be an unnecessary, probably fraudulent anti-science adjustment

            Sorry you are INCAPABLE of accepting that fact..

            Your comments show that you don’t actually comprehend what TOBs really is.

            If the changeover happened, it would be a single step change at that single site, not a continual adjustment process applied to the whole once-was-data.

            And as the graph above shows, the trend remains the same, whereas the TOBs adjustment causes nearly all of the fabricated trend in the record.

            It is an anti-scientific LIE.

            and I suspect you know that.

            PAID ???
