The Midwest Heatwave Of July, 1936

July, 1936 was about 15 degrees hotter than July, 2017 in the Midwest. The animation below shows the daily maximum temperatures at all Midwest USHCN stations during both years.


8 Responses to The Midwest Heatwave Of July, 1936

  1. MrZ says:

    Hi Tony!
    This looks promising. I don’t fully get what is red and what is orange, though. Shouldn’t red be on top?

    Two areas of critique from the other side are:
    – He is not “gridding” the data. Fact: minimal impact for USHCN!
    – He is not using anomalies. Fact: real temps are clearer! But it is still something for laymen to latch onto…

    One idea to close both gaps, while still producing clear graphs, could be to look at a station’s average temp (a base temp) and then see how anomalies fluctuate between years for individual base temps.
    I did that in the graph for Scott, who turned out to be a “graph dyslectic”, the other day, and I repeat it below.

    What I have done is (a rough code sketch follows this list):
    – Calculate the average max temp for every station over its lifetime (1900-2017) for every month, i.e. 12 base temperatures per station
    – Group stations by these base temperatures
    – Calculate the average anomaly for every base temperature across the stations that share it
    – Compare unadjusted and adjusted data.
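
    In rough Python/pandas terms it looks like this (a sketch only, not my actual code; the column names are made up for illustration):

        # Rough sketch of the base-temp grouping described in the list above.
        # Assumed input (made-up column names): monthly mean max temps per
        # station in a DataFrame with columns ["station", "year", "month", "tmax"].
        import pandas as pd

        def base_temp_anomalies(df: pd.DataFrame, bin_width: float = 1.0) -> pd.DataFrame:
            # 1. Base temp: each station's lifetime mean max temp for each
            #    calendar month, i.e. 12 base temperatures per station.
            base = df.groupby(["station", "month"])["tmax"].transform("mean")
            df = df.assign(base=base, anom=df["tmax"] - base)

            # 2. Group station-months into base-temperature bins.
            df["base_bin"] = (df["base"] / bin_width).round() * bin_width

            # 3. Average anomaly per bin and year across all stations
            #    sharing that bin.
            return (df.groupby(["year", "base_bin"])["anom"]
                      .mean()
                      .reset_index(name="mean_anom"))

        # Run once on unadjusted and once on NOAA-adjusted data, then plot
        # mean_anom against base_bin per year to reproduce the graph below.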

    Below you see USHCN 1930-1939 as gray. 1934 is red and 2012 is blue. Dotted is NOAA adjusted.
    To me, and I am biased here, this clearly shows how 2012 could become the “hottest” year evvehh: it was simply because of a milder winter. If you instead compare the 20-35C range, it is a totally different story… (sorry for the centigrade, but you get it). The high gray span at 24-32 is 1936 (I should have colored it). 1936 failed to make the hottest list because of a very cold spring (again, gray far below 0 at the beginning).

    I argue that stations with similar base temperatures should react similarly to sun, back radiation and what have you. This way of viewing things also makes it very clear that the impact is NOT linear: colder stations are more easily heated than warmer ones. (This is actually a no-brainer, but it is effectively hidden by the common anomaly methods.)

    Even if you don’t want to use this yourself, is it a good idea?

    Thanks

    • tonyheller says:

      Gridding is meaningless for station data, which is located at a single point. Gridding only comes into play when climate scientists want to obfuscate their fraudulent adjustments.

    • Colorado Wellington says:

      Gridding is the fundamental method of modern climate science. Here is how it’s been done at the Climatic Research Unit of Dr. Phil Jones at the University of East Anglia:

      The IDL gridding program calculates whether or not a station contributes to a cell, using.. graphics. Yes, it plots the station sphere of influence then checks for the colour white in the output. So there is no guarantee that the station number files, which are produced *independently* by anomdtb, will reflect what actually happened!!

      Well I’ve just spent 24 hours trying to get Great Circle Distance calculations working in Fortran, with precisely no success. I’ve tried the simple method (as used in Tim O’s geodist.pro) and the more complex and accurate method found elsewhere (wiki and other places). Neither give me results that are anything near reality. FFS.

      Worked out an algorithm from scratch. It seems to give better answers than the others, so we’ll go with that.

      The problem is, really, the huge numbers of cells potentially involved in one station, particularly at high latitudes.

      Out of malicious interest, I dumped the first station’s coverage to a text file and counted up how many cells it ‘influenced’. The station was at 10.6E, 61.0N.

      The total number of cells covered was a staggering 476!
      —–
      Back to the gridding. I am seriously worried that our flagship gridded data product is produced by Delaunay triangulation – apparently linear as well.

      As far as I can see, this renders the station counts totally meaningless.

      It also means that we cannot say exactly how the gridded data is arrived at from a statistical perspective – since we’re using an off-the-shelf product that isn’t documented sufficiently to say that.

      Why this wasn’t coded up in Fortran I don’t know – time pressures perhaps? Was too much effort expended on homogenisation, that there wasn’t enough time to write a gridding procedure? Of course, it’s too late for me to fix it too.

      Meh.

      More here, for anyone with a strong stomach.
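
      For reference, the great-circle distance that defeated the Fortran attempt above is the standard haversine formula, a few lines in any language. A sketch in Python (my own illustration, not the CRU code; the 0.5-degree cell size and 1200 km influence radius are my assumptions, the quote gives neither):

          import math

          R_EARTH_KM = 6371.0  # mean Earth radius

          def haversine_km(lat1, lon1, lat2, lon2):
              # Great-circle distance in km via the standard haversine formula.
              phi1, phi2 = math.radians(lat1), math.radians(lat2)
              dphi = math.radians(lat2 - lat1)
              dlam = math.radians(lon2 - lon1)
              a = (math.sin(dphi / 2) ** 2
                   + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
              return 2.0 * R_EARTH_KM * math.asin(math.sqrt(min(1.0, a)))

          def cells_influenced(lat, lon, radius_km, cell_deg=0.5):
              # Brute-force count of grid cells whose centres lie within
              # radius_km of the station.
              count = 0
              for i in range(int(180 / cell_deg)):
                  lat_c = -90.0 + (i + 0.5) * cell_deg
                  for j in range(int(360 / cell_deg)):
                      lon_c = -180.0 + (j + 0.5) * cell_deg
                      if haversine_km(lat, lon, lat_c, lon_c) <= radius_km:
                          count += 1
              return count

          # The station quoted above, with an assumed 1200 km radius.
          print(cells_influenced(61.0, 10.6, 1200.0))

      East-west cell width shrinks with the cosine of latitude, so a fixed radius sweeps up far more cells near the poles, which is exactly the “huge numbers of cells at high latitudes” problem the readme complains about.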
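
      And the “gridded by Delaunay triangulation, apparently linear” part is easy to picture. This scipy sketch shows the general technique on synthetic stations (again my illustration, not the CRU pipeline):

          import numpy as np
          from scipy.interpolate import griddata

          rng = np.random.default_rng(0)
          stations = rng.uniform([-10.0, 50.0], [30.0, 70.0], size=(40, 2))  # (lon, lat)
          values = 20.0 - 0.5 * (stations[:, 1] - 50.0)  # synthetic temperatures

          # Interpolate the scattered station values onto a 0.5-degree grid.
          # method="linear" triangulates the stations (Delaunay) and
          # interpolates linearly within each triangle.
          lon, lat = np.meshgrid(np.arange(-10, 30, 0.5), np.arange(50, 70, 0.5))
          grid = griddata(stations, values, (lon, lat), method="linear")

          # Cells outside the stations' convex hull come back NaN, and nothing
          # in the output records how many stations fed each cell.

      Whatever its merits as an interpolator, the triangulation keeps no per-cell notion of how many stations contributed, which is presumably why the station counts stop meaning anything.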

      • RAH says:

        Gridding can be a useful, even necessary, tool, but in the wrong, unethical hands it has exactly the same problem as gerrymandering political voting districts. There is just no substitute for integrity, and when you get down to it, that is what the fundamentals of science and statistics require of those who apply them, if they are to be useful tools for gaining knowledge and understanding.

  2. MrZ says:

    A few hours have passed.
    Now anybody can respond. If it’s idiotic, I want to know…
