Zeke: Baffling With His Bull

Zeke Hausfather is always trying to discredit my graphs showing NOAA data tampering, with one line of BS or another. One of his favorites is the claim that “changing station composition” introduces a cooling bias into the US temperature record.

Where did Goddard go wrong?

Goddard made two major errors in his analysis, which produced results showing a large bias due to infilling that doesn’t really exist. First, he is simply averaging absolute temperatures rather than using anomalies.

Absolute temperatures work fine if and only if the composition of the station network remains unchanged over time.

  • Zeke Hausfather

The Blackboard » How not to calculate temperature
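To make the dispute concrete, here is a minimal synthetic sketch of the effect Zeke describes; the stations, values, and baseline period are invented for illustration, and this is nobody's actual code or data:

```python
import numpy as np

years = np.arange(1900, 2000)
# Two hypothetical stations with zero real trend but different
# base climates (values in degrees F, invented for illustration).
warm = np.full(years.size, 60.0)
cool = np.full(years.size, 50.0)

# Suppose the warm station stops reporting after 1950.
warm = np.where(years <= 1950, warm, np.nan)

# Averaging absolute temperatures: the network "cools" 5F after
# 1950 purely because the station mix changed.
absolute = np.nanmean(np.vstack([warm, cool]), axis=0)

# Averaging per-station anomalies (each station minus its own
# 1900-1929 baseline): the composition change drops out entirely.
anoms = np.vstack([warm - np.nanmean(warm[:30]),
                   cool - np.nanmean(cool[:30])])
anomaly = np.nanmean(anoms, axis=0)

print(absolute[40], absolute[60])  # 55.0 then 50.0: spurious step
print(anomaly[40], anomaly[60])    # 0.0 both times: no step
```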

It is simple enough to test his theory by using only the stable set of 720 stations which have been continuously active since 1920.

The trend for the stable set of stations (above) is almost identical to the trend for all 1218 stations (below).
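A sketch of that test harness; the array below is filled with synthetic stand-in values, since the real test would read NOAA's raw USHCN files:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1895, 2018)

# Synthetic stand-in for the USHCN raw annual means: 1218 stations,
# flat climate plus noise, ~30% of stations going dark after 1990.
# (Invented numbers; the real test would read NOAA's raw files.)
temps = 52.0 + rng.normal(0.0, 1.0, size=(1218, years.size))
dropouts = rng.random(1218) < 0.3
temps[np.ix_(dropouts, years > 1990)] = np.nan

def trend_f_per_decade(t):
    """Least-squares trend of the simple network average, F/decade."""
    avg = np.nanmean(t, axis=0)
    ok = ~np.isnan(avg)
    return np.polyfit(years[ok], avg[ok], 1)[0] * 10.0

# Stable subset: stations with a value in every year since 1920.
stable = ~np.isnan(temps[:, years >= 1920]).any(axis=1)

print("all stations:    %+.3f F/decade" % trend_f_per_decade(temps))
print("stable stations: %+.3f F/decade" % trend_f_per_decade(temps[stable]))
```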

The difference in trend between the two groups of stations is very minor. No matter how Zeke tries to distract attention with his BS, the US used to be much hotter, and the NOAA data tampering is fraudulent.


21 Responses to Zeke: Baffling With His Bull

  1. richard verney says:

    When arguing with Nick Stokes, I have frequently pointed out that one must always keep the composition of the station network … unchanged over time. This is where Climate Science goes wrong.

    Some time ago, you posted charts using the stable 720 stations. In my opinion this is the approach that you should adopt, since it overcomes the argument raised by Zeke Hausfather, although it is interesting that there is not much difference in trend between the set consisting of the 720 stable stations and the set based upon the entire 1218-station network.

    Whilst there is not much difference in trend as far as the US is concerned, this is no doubt because the US has the best records and sampling. I envisage that there is a substantial difference when one applies these different approaches to the globe, due to the poor spatial coverage, inadequate sampling, and poor historic records, and because infilling becomes a much greater issue.

  2. richard verney says:

    I note that you set out only one of Zeke’s two criticisms, and hence addressed only one of his arguments as to why your approach is wrong. I consider that you have demonstrated that there is not much merit in the one point Zeke was arguing, as far as US temperature trends are concerned.

    Accordingly, I had a quick look at the link you provided, and I noted that Zeke has only one other point when it comes to US temperature records/trends. Zeke’s other complaint is:

    His [Goddard’s] second error is to not use any form of spatial weighting (e.g. gridding) when combining station records. While the USHCN network is fairly well distributed across the U.S., its not perfectly so, and some areas of the country have considerably more stations than others. Not gridding also can exacerbate the effect of station drop-out when the stations that drop out are not randomly distributed. (my emphasis)

    This argument carries little weight given the extent of sampling in the US; the network may not be perfectly evenly distributed, but it is distributed well enough to draw some worthwhile conclusions (see the gridding sketch after this comment). This second point of Zeke’s is, of course, a material issue when one looks at the position globally, rather than being confined to the US.

    Further, whilst this may not be definitive in addressing Zeke’s concerns, the mere fact that there were considerably more hot days in the past than there are today also strongly supports the view that the US was warmer in the past and has cooled somewhat over the course of the last 80 or so years.
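    For anyone who wants to check the gridding point directly, here is a minimal sketch of the spatial weighting Zeke describes, with randomly placed hypothetical stations standing in for the real network:

    ```python
    import numpy as np

    # Randomly placed hypothetical stations stand in for the network;
    # `anom` is one year's station anomalies.
    rng = np.random.default_rng(1)
    lat = rng.uniform(25, 49, 1218)
    lon = rng.uniform(-125, -67, 1218)
    anom = rng.normal(0.0, 1.0, 1218)

    def gridded_mean(lat, lon, vals, cell=2.5):
        """Average stations within each lat/lon cell, then average the
        cells with cos(latitude) weights, so densely sampled regions
        don't dominate the national figure."""
        keys = zip(np.floor(lat / cell).astype(int),
                   np.floor(lon / cell).astype(int))
        cells = {}
        for k, v in zip(keys, vals):
            cells.setdefault(k, []).append(v)
        means = np.array([np.mean(v) for v in cells.values()])
        w = np.cos(np.radians([(i + 0.5) * cell for i, j in cells]))
        return np.average(means, weights=w)

    print("plain mean:   %+.4f" % anom.mean())
    print("gridded mean: %+.4f" % gridded_mean(lat, lon, anom))
    ```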

    • Mark Fife says:

      I have run into that spatial averaging business as well. In fact, I was told I needed to “area weight” my averages and engage in “spatial statistics” if I wanted to construct an annual estimated temperature average for the planet.

      Only then could I determine how much energy was in my “bucket”. Or maybe that was balance. But this is all horse apples. I don’t give a flying toot. I am not trying to determine how much energy is being radiated by the surface. Or whatever.

      The way to combat people who are trying really hard to baffle the world with bull$#^! is to properly define the scope of the data in question and what it means.

      We are dealing with surface temperature records only, which are defined to be measured at a single location under specified conditions. A land temperature monitoring station measures the temperature of the air in a vented box at a particular location, at a set height above the ground. Its absolute temperature reflects only the temperature inside the box. That temperature is a proxy for the surrounding grounds of the monitoring station. The thermometer inside the box doesn’t actually measure the temperature of the air inside the box either. A thermometer measures the temperature of a temperature-reactive medium inside the thermometer. Thus, the temperature-reactive medium is a proxy for the temperature inside the box.

      Therefore, it is by no means certain that the temperature-reactive medium inside a thermometer inside a vented box accurately reflects the average temperature of every object, surface, or cubic volume of air contained within the area of the monitoring site. The reality is that it does not. We merely assume there is a correlation.

      It is absolutely certain the temperature measured within a vented box within a monitoring site does not accurately reflect temperatures at nearby locations. Temperatures in similar locations nearby can and do vary by varying amounts.

      With respect to the energy budget contained within the surface of the planet’s land mass, these temperature readings would represent at best a gross estimate of just the monitoring site itself.

      So no, we are not establishing any sort of an accurate average temperature for any surface area. We are looking at specific, discrete locations defined by the location and dimensions of a bunch of vented boxes. That is literally all we know.

      As such, an area weighted average or a spatially weighted average makes no sense at all.

      So go on, pull the other one.

      • cdquarles says:

        Exactly right. The thermodynamic temperature is, itself, the geometric mean estimate of a defined sample of matter’s kinetic energy and only its kinetic energy. So, tell me, just what an average of averages means when you don’t include the range or some other measure of dispersion *and* do not do a proper error analysis and propagation? /rhetorical
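        For the record, here is a toy sketch of the propagation being asked for, with invented station statistics and an independent-errors assumption: the uncertainty of an average of averages has to be carried through, not discarded.

        ```python
        import math

        # Invented station summaries: (mean F, std dev F, n observations).
        stations = [(52.1, 8.0, 365), (55.4, 7.5, 365), (49.8, 9.1, 360)]

        # Standard error of each station mean, then of the unweighted
        # average of the means, assuming independent errors.
        se = [s / math.sqrt(n) for _, s, n in stations]
        mean = sum(m for m, _, _ in stations) / len(stations)
        mean_se = math.sqrt(sum(e * e for e in se)) / len(se)

        print("combined mean: %.2f +/- %.2f F" % (mean, mean_se))
        ```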

  3. Gator says:

    Anomalies are crap.

    • Mark Fife says:

      From what I have seen, what they mean by anomalies is just subtracting the 1951 to 1980 average from everything, which doesn't do anything to the actual data other than change the zero point. What possible inference can be made from subtracting the average temperature of Cuba from 1951 to 1980 from the average temperature of Canada in 2015? In effect, that is what they are doing.

      I would ask anomalies from what? Define how you calculate an anomaly and then we can see.

      I promise you, they won’t do that. To define the bull$#^% is to expose the bull$#^%.

      • MrZ says:

        If it helps, Mark, they actually use individual anomalies per station.
        The purpose is to focus on a station's delta changes. The idea is that stations close to each other (in the same grid cell) will have the same delta pattern, even though one can have a 50F base and the other 60F. If one of the two stops reporting, the delta provided by the remaining station is still OK. Losing one is far more dramatic if you rely on absolute averages.
        However, because USHCN has so many stations, the anomaly method makes very little difference. The averaging error will random out: sometimes you lose a warmer station, sometimes a colder one.

        My 25 cents is that anomalies are good for building the temperature series. They are, however, misleading when presenting, because a 90F-to-100F rise has the same appearance as a 0F-to-10F rise.
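        To spell out what "individual anomalies per station" means in practice, here is a small sketch with invented readings and an arbitrary 1961-1990 baseline:

        ```python
        import numpy as np

        # An anomaly here is a station's reading minus that SAME station's
        # own baseline mean, never one place minus another. Values and the
        # 1961-1990 baseline are illustrative.
        readings = {
            "station_a": {1961: 50.2, 1975: 50.8, 1990: 51.1, 2015: 51.9},
            "station_b": {1961: 60.1, 1975: 60.7, 1990: 61.0},
        }

        def anomalies(series, base_years=range(1961, 1991)):
            base = np.mean([t for y, t in series.items() if y in base_years])
            return {y: round(t - base, 2) for y, t in series.items()}

        for name, series in readings.items():
            print(name, anomalies(series))
        # Both stations show the same deltas despite 50F vs 60F bases, so
        # losing station_b after 1990 barely moves the average anomaly.
        ```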

        • Mark Fife says:

          I think you are wrong about that. I downloaded the raw, unadjusted data from the Berkeley Earth site and crunched their data. I reproduced their published graph of average temperatures precisely by merely averaging all the data. The only difference is they subtracted the 1951-1980 average to set that period's mean to zero. That is obviously total crap.

          I am sorry. The adjustments they made were in the choice of data. In the “hot” years they added the same station data for the same year as many as 99 times.

          Now, as for what you are saying, that is part of what I am doing in my latest look at the data.

          The other part is looking at how you would go about constructing a temperature record from a series of stations reporting at different times and in different locations. The one thing I have determined is that you must build such a record location by location. For example, I have made such a reconstruction of Greenland from 1897 to 2011. It accurately reflects the average magnitude of change for each station I included in the record. The only piece I have not worked out is the uncertainty factor: how well can I expect my model to reflect other locations in Greenland?

          See my blog post on this.

          http://bubbaspossumranch.blogspot.com/2017/07/2016-was-hottest-year-evah-or-was-it.html
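          For what it's worth, here is one plausible way (not necessarily the method used in Mark's post) to splice stations with different reporting periods into a single location-by-location record, sketched with invented station series:

          ```python
          import numpy as np

          def combine(stations):
              """Splice station series into one composite: offset each new
              station to agree with the running composite over their shared
              years (the first station sets the reference level)."""
              composite = {}  # year -> list of offset-adjusted values
              for series in stations:
                  done = {y: np.mean(v) for y, v in composite.items()}
                  shared = [y for y in series if y in done]
                  off = np.mean([done[y] - series[y] for y in shared]) if shared else 0.0
                  for y, t in series.items():
                      composite.setdefault(y, []).append(t + off)
              return {y: float(np.mean(v)) for y, v in sorted(composite.items())}

          # Synthetic demo: same underlying trend, different base levels/spans.
          s1 = {y: -1.0 + 0.01 * (y - 1897) for y in range(1897, 1960)}
          s2 = {y: 4.0 + 0.01 * (y - 1897) for y in range(1930, 2012)}
          rec = combine([s1, s2])
          print(rec[1897], rec[1955], rec[2011])  # no artificial 5-degree jump
          ```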

    • Louis Hooffstetter says:

      “Anomalies are crap.” Agreed!

      When your kids are sick, do you take their temperature and worry about what the anomaly is, or do you worry about what the actual temperature is?

      There is absolutely nothing wrong with actual temperatures. Anomalies are smoke and mirrors.

  4. MrZ says:

    This is the graph Zeke “forgot” to include in his discussion.
    Yellow is GHCNM raw data anomalies (base 1961-1990) for the USHCN stations, gridded and infilled. This method, which Zeke recommends, reproduces Tony’s first graph above perfectly.
    Magenta is the GHCNM adjusted data for the same stations, also gridded and infilled.
    Conclusion:
    – US average max temperatures are warming only if you trust the adjustments.
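    For readers unfamiliar with the term, "infilled" means roughly this kind of operation; a toy sketch with invented anomalies, not GHCN's actual algorithm:

    ```python
    import numpy as np

    # Minimal infilling sketch: a missing station-year is filled with
    # the mean anomaly of the other stations in the same grid cell.
    # Hypothetical anomalies for one cell; NaN = station not reporting.
    cell = np.array([[0.3, 0.5, np.nan, 0.4],   # station 1, four years
                     [0.2, 0.6, 0.7, np.nan],   # station 2
                     [0.4, np.nan, 0.6, 0.5]])  # station 3

    col_means = np.nanmean(cell, axis=0)        # per-year cell means
    filled = np.where(np.isnan(cell), col_means, cell)
    print(filled)
    ```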

  5. arn says:

    Aren’t anomalies by definition “rare” happenings,
    and so barely useful for defining an average?

    • Gator says:

      No.

      a·nom·a·ly əˈnäməlē/ noun
      1. something that deviates from what is standard, normal, or expected.

      I guess for those who don’t get the big picture, anomalies make sense. But I started as a geology student and ended up as a climatology student, so my perspective is a bit larger than that of most “climate experts”, who seem to think that the tail end of this interglacial is somehow the standard by which all climates should be compared. What I learned through my education is that there is no “normal” in climate or weather, and IMHO, the claim that there is a “normal” climate is simply ignorance, or worse…

      • AndyG55 says:

        They are always stumped by the question,

        “What SHOULD the global average temperature concoction be?”

        And why?

        We are definitely only a small amount above the COLDEST period in 10,000 years. How is that possibly “ideal”?

        Arctic sea ice is still in the top 10% of the last 10,000 years, and WAY above basically anything during the MWP or before. How is that “ideal”?

      • MrZ says:

        It is especially bad when they use anomalies to explain why our interpretation is wrong.
        As I showed above, Zeke (or his staff) could easily have created that same graph. Instead he argues with nonsense that simply does not apply to as solid a series as USHCN.

      • Louis Hooffstetter says:

        “What I learned through my education is that there is no “normal” in climate or weather…”

        Yep. My geology professors taught us that “The only thing constant about the Earth is change.”

  6. mickey says:

    “Zeke Hausfather is always trying to discredited[sic] my graphs…”

    I think you mean “discredit”.

  7. Anyone, even Zeke the Sneak, can use Google News to select a newspaper that has been in business for most of the interval in question and plot its daily temps. I’d bet money the graph of reported and recorded temperatures will look like Tony’s charts.
    I stumbled upon something of Zeke’s purporting to show that temperature cannot be measured, because it varied with the diameters of some goofy tubes in a contrived experiment to measure temperature any way BUT with the thermometers that gave us the existing record. The vertical axis revealed that the exercise in pedantry was conducted over a quarter of a degree Celsius. This is taking self-deception and magnifying it into mental illness.

  8. Louis Hooffstetter says:

    I made this comment back in 2014 in response to Zeke’s post:
    “Engineers who work with data realize:
    Empirical measurements trump synthetic data (infilled data, interpolated data, extrapolated data, etc.). Synthetic data is only used as a last resort. You always use empirical measurements when you have them. The only reason anyone uses synthetic data rather than empirical data is to cheat.

    Empirical measurements are sacrosanct. You don’t adjust them or tweak them; that’s cheating. Adjusted data is synthetic data.

    In regards to this, Anthony Watts asked Zeke these simple questions:
    1. What is the CONUS average temperature for July 1936 today?
    2. What was it a year ago?
    3. What was it ten years ago? Twenty years ago?
    4. What was it in late 1936, when all the data had been first compiled?

    We already know the answers to questions 1 and 2…, they are 76.43°F and 77.4°F respectively, so Zeke really only needs to answer questions 3 and 4.”
    Of course, Zeke never answered the questions.
