I tested my theory by removing every other station when calculating the US temperature. Doing so has almost no impact on either the yearly temperatures or the trend.
Disrupting the Borg is expensive and time consuming!
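For anyone who wants to reproduce the thinning experiment above, here is a minimal sketch. It assumes a hypothetical CSV of USHCN station annual means; the file name and column layout (station_id, year, tavg) are illustrative, not the actual format.

```python
# Minimal sketch of the thinning experiment described above.
# Assumes a hypothetical CSV of station annual means with columns
# station_id, year, tavg; file name and layout are illustrative.
import pandas as pd

df = pd.read_csv("ushcn_annual.csv")  # hypothetical input file

# Simple US average: mean of all stations reporting in each year.
all_stations = df.groupby("year")["tavg"].mean()

# Keep every other station (sorted by ID) and recompute.
kept = sorted(df["station_id"].unique())[::2]
half = df[df["station_id"].isin(kept)].groupby("year")["tavg"].mean()

# Compare the two series; if the claim holds, the difference is tiny.
print((all_stations - half).describe())
```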
Yes, that’s why so few stations are needed for good global (or US, or any other regional) temperature indices, uninfluenced by local effects. Just use the best of the best stations: pristine, far from any human impact, never moved. Globally, even 50 or so excellent stations are much better than thousands of stations with the bad ones mixed in. No need to tamper with the raw data at all. Just plot them all, spaghetti style.
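A quick sketch of the spaghetti idea, assuming a hypothetical file of annual means for a hand-picked set of pristine stations (the file name and columns are made up for illustration):

```python
# Plot raw annual means for a set of pristine stations, no
# adjustments, spaghetti style. Input file is a hypothetical
# CSV with columns station_id, year, tavg.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("pristine_stations_annual.csv")  # hypothetical file

fig, ax = plt.subplots()
for sid, grp in df.groupby("station_id"):
    ax.plot(grp["year"], grp["tavg"], alpha=0.4, linewidth=0.8)
ax.set_xlabel("Year")
ax.set_ylabel("Annual mean temperature (°C)")
ax.set_title("Raw station data, spaghetti style")
plt.show()
```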
You are obviously an amateur and have no chance of being hired at NCDC. Can’t even cook the books using “Data Selection”. Yeesh.
The first requirement is to select stations that show a good increase in temperature over time, whether from UHI or by accident of geography. The second step is to discredit the rejected sites for any real or fabricated reason available. After this, your task is complete and the funding will continue to flow.
Although it should not have to be said, removing or adding stations will absolutely change the trends if the stations are systematically chosen to do so. As E.M. Smith showed some years back, instead of removing stations in an effectively random way (as you did), the Powers That Be have removed and added stations non-randomly. They have dropped stations in higher, cooler locations and then backfilled the same areas with data interpolated from lower, warmer areas. They have done pretty much the same thing with their so-called “adjustments”, though in the case of adjustments, they have distributed the changes in a systematic, non-random pattern based primarily on time instead of on location.
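A toy simulation makes the point: the numbers below are invented, but they show how dropping the cool, high-elevation stations partway through a record injects a warming trend into a naive average, while random dropping does not.

```python
# Toy demonstration: systematic vs random station removal.
# All numbers are made up purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_st, n_yr = 100, 60
elev = rng.uniform(0, 3000, n_st)                 # station elevations, meters
base = 15.0 - 6.5 * elev / 1000.0                 # lapse rate: cooler up high
temps = base[:, None] + rng.normal(0, 0.5, (n_st, n_yr))  # no real trend anywhere

def naive_avg(mask):
    # Average over whichever stations survive in each year.
    return np.array([temps[mask[:, y], y].mean() for y in range(n_yr)])

full_mask = np.ones((n_st, n_yr), dtype=bool)

# Random removal: half the stations gone for the whole record.
rand_mask = full_mask.copy()
rand_mask[rng.choice(n_st, n_st // 2, replace=False), :] = False

# Systematic removal: the highest (coolest) half dropped after year 30.
sys_mask = full_mask.copy()
sys_mask[np.argsort(elev)[n_st // 2:], 30:] = False

for name, m in [("all stations", full_mask),
                ("random half", rand_mask),
                ("biased drop", sys_mask)]:
    slope = np.polyfit(np.arange(n_yr), naive_avg(m), 1)[0]
    print("%-12s trend: %+.3f deg/yr" % (name, slope))
```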
I only repeat the obvious because there will certainly be some warmist somewhere who says, “See! Don’t complain because we dropped stations. Goddard has shown that removing even half of the stations has no effect!” A few years ago I actually read the CAGW folk claiming that adjustments made no difference “because they all sum up to zero”, never considering that their distribution in time was not random.
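The zero-sum point is easy to demonstrate with made-up numbers: adjustments that sum to exactly zero still manufacture a trend when the negative ones land early and the positive ones land late.

```python
# Adjustments that sum to zero can still add a trend.
# All values here are invented for illustration.
import numpy as np

years = np.arange(1900, 2020)
raw = np.zeros(len(years))                    # flat raw record

adj = np.linspace(-0.5, 0.5, len(years))      # negative early, positive late
print("sum of adjustments:", round(adj.sum(), 6))   # ~0

adjusted = raw + adj                          # yet a full degree end to end
slope = np.polyfit(years, adjusted, 1)[0]
print("trend added: %.2f deg per century" % (slope * 100))
```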
+1
Steven Mosher of BEST fame claimed over at Climate Etc. a few weeks ago that ‘a couple hundred’ stations over the globe is enough to nail the temperature trend. That amounts to about 5 stations in the continental US. It would be fun to see a spaghetti plot of the ‘nailed’ US temperature trend using a variety of hand-picked or randomly chosen station pentads. My guess is that this ‘Pick 5’ approach, using the best surface station record in the Universe, will lead to the conclusion that ‘results can be cherry-picked however you want.’
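A rough sketch of that ‘Pick 5’ experiment (the input file and its columns are assumptions; any table of station annual means would do):

```python
# Draw many random groups of five stations, fit a linear trend to
# each group's average, and look at the spread of the results.
import numpy as np
import pandas as pd

df = pd.read_csv("ushcn_annual.csv")            # hypothetical file
wide = df.pivot(index="year", columns="station_id", values="tavg")

rng = np.random.default_rng(42)
trends = []
for _ in range(1000):
    pick = rng.choice(wide.columns, 5, replace=False)
    series = wide[pick].mean(axis=1).dropna()
    trends.append(np.polyfit(series.index, series.values, 1)[0])

trends = np.array(trends) * 100                 # convert to deg per century
print("pentad trends: mean %.2f, min %.2f, max %.2f" %
      (trends.mean(), trends.min(), trends.max()))
```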
There needs to be a convincing, repeatable demonstration that anyone can run, one that leaves zero chance the record has not been purposely manipulated to be warmer. This method is one such demonstration: randomly removing stations still produces an average with very tight confidence intervals. A little app that lets a person run this themselves would show that the raw data bears no resemblance to the cooked data.
So how do all the potential “natural” (i.e., random) causes, like the elimination of stations (as long as it is done in a random way), get eliminated one by one, to leave the one glaring cause: that the temperature increase really is man-made, by men in New York City?
It’s a big project, and Steve demonstrates over and over again how many ways you can look at it and show obvious problems. But how do you PROVE beyond ANY doubt that there is fraud going on? Because once you do that, you really could kill this CAGW thing dead. It would be the scandal of the century, and skeptics have been saying for over a decade that there is fraud going on. Right now it seems obvious to many of us, but there needs to be a way to convincingly let the public run the damn numbers themselves, applying whichever peer-reviewed adjustments they want, and allow them to see that you can’t “get there from here”. You can’t get to the inflated numbers using raw data and the “proper” adjustments. Once you could do that, the only conclusion left is the deliberate destruction of the property of US citizens for political purposes.
Once we get a real Attorney General and DOJ, we could get people prosecuted for such crimes, and you’d have legions of volunteer statisticians, engineers and so on doing the work of demonstrating the fraud. I’d love to help in an effort like that, but it just seems too big for me to take on, and most of my efforts to work on this stuff have failed due to choosing the wrong platform (Excel plus thousands of lines of VBA code to process it). It has never run for more than 7 hours without crashing. Ideas?
Tough to say given the spatial challenges, but the standard deviation of the mean varies with the inverse of the square root of n. So if I have 1 station and a standard deviation of 1°F as my estimate of the true average, and I increase that to 100 stations (drawn from the exact same population, a situation we don’t actually have), then n has increased 100x, so sigma decreases by 100^0.5, or 10x. Our sigma-xbar will now be 0.1°F. Now if we get 10,000 stations, same thing, and sigma-xbar goes to 0.01°F. We will know the range of the “true” mean (which is unknowable) to a much better estimate simply because we have measured so much more. So throwing away 50% of the data will increase your standard deviation of the mean by a factor of 2^0.5, or about 1.414x.
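A quick numeric check of that arithmetic, under the idealized assumption of independent, identically distributed stations:

```python
# Standard error of the mean scales as sigma / sqrt(n),
# assuming i.i.d. stations (an idealization we don't really have).
import math

sigma = 1.0                                    # one station: 1 deg F
for n in (1, 100, 10_000):
    print(n, "stations -> sigma of the mean =", sigma / math.sqrt(n))

# Throwing away half the stations inflates the standard error:
print("inflation factor:", math.sqrt(2))       # ~1.414
```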
That the standard deviation of the mean varies with the inverse of the square root of n is true, but only if the ‘noise’ is Gaussian white noise. This does not apply to unipolar biases like UHI. I am not sure how it would apply to the common situation where station locations are changed, stations are revamped with new equipment, etc.
I think we agree that 5 stations is probably not sufficient to give an accurate temperature trend for the US.
Anyway, I think it would be great to calculate 1000 USHCN temperature trends using 1000 randomly chosen temperature station groups of 5 (NOAA says their stations are all awesome, so what’s not to like?).
Another interesting experiment would be to rigorously apply MBH98 weighting techniques to the US stations, using the following ‘calibration’ curves: flat line, exponential increasing, exponential decreasing, sine wave, square wave.
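Here is a rough toy version of that experiment. It is a generic correlation-weighting sketch, not a line-for-line MBH98 reimplementation: weight pure red-noise ‘stations’ by their correlation with a chosen calibration curve, and watch the calibration shape emerge from noise.

```python
# Correlation-weighting toy: red-noise series weighted by their
# correlation with a target 'calibration' curve tend to reproduce
# the target shape, whatever that shape is.
import numpy as np

rng = np.random.default_rng(1)
n_st, n_yr = 1000, 120
noise = np.cumsum(rng.normal(0, 0.1, (n_st, n_yr)), axis=1)  # red noise

t = np.linspace(0, 1, n_yr)
curves = {
    "flat":     np.zeros(n_yr),
    "exp up":   np.exp(3 * t) - 1,
    "exp down": np.exp(3 * (1 - t)) - 1,
    "sine":     np.sin(4 * np.pi * t),
    "square":   np.sign(np.sin(4 * np.pi * t)),
}

for name, target in curves.items():
    if target.std() == 0:
        w = np.ones(n_st)                  # flat target: equal weights
    else:
        w = np.array([np.corrcoef(s, target)[0, 1] for s in noise])
    recon = (w[:, None] * noise).sum(axis=0) / np.abs(w).sum()
    r = 0.0 if target.std() == 0 else np.corrcoef(recon, target)[0, 1]
    print("%-8s -> reconstruction vs target correlation: %.2f" % (name, r))
```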
Depends on which half you remove…
“Using data downloaded from NASA GISS and picking rural sites near, but not too near, to urban sites, a comparison has been made of the temperature trend over time of the rural sites compared to those of the urban sites. 28 pairs of sites across the U.S. were compared. The paired rural site is from 31 to 91 km from the urban site in each pair. The result is that urban and rural sites were similar in 1900, with the urban sites slightly higher. The urban sites have shown an increase in temperatures since then. The rural sites show no such temperature increase and appear to be generally unchanging, with only ups and downs localized in time. Over a 111 year time span, the urban sites’ temperatures have risen to be about 1.5C warmer than the rural sites. So, the much-touted rising temperatures in the U.S. are due to the urban heat island effect, and not due to a global warming such as has been proposed to be caused by human emissions of CO2 from the combustion of fossil fuels.”
http://objectivistindividualist.blogspot.com/2009/12/rural-us-sites-show-no-temperature.html?m=1
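A sketch of the paired comparison described in the quote; the pairing table, file names and columns are illustrative assumptions, not the actual data used at that link.

```python
# For each urban/rural pair, difference the two series and look at
# how the difference drifts over the record. Both input files are
# hypothetical: one table of station annual means, one pairing table.
import pandas as pd

temps = pd.read_csv("giss_annual.csv")   # hypothetical: station_id, year, tavg
pairs = pd.read_csv("pairs.csv")         # hypothetical: urban_id, rural_id

wide = temps.pivot(index="year", columns="station_id", values="tavg")
for _, p in pairs.iterrows():
    drift = (wide[p.urban_id] - wide[p.rural_id]).dropna()  # urban minus rural
    print(p.urban_id, p.rural_id,
          "early vs late difference change: %.2f" %
          (drift.tail(10).mean() - drift.head(10).mean()))
```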