Global Warming Alarmists Promote XKCD Time Series Cartoon, Ignore Its Mistakes

Not everything is as it seems

By William M. Briggs | Published on September 16, 2016

The popular web cartoon xkcd has provided a wonderful opportunity to plug my must-read (and too expensive) book Uncertainty: The Soul of Modeling, Probability & Statistics. Buy a copy and follow along.

In this award-eligible book, which has the potential to be read by millions and which has the power to change more lives than even the Atkins Diet, I detail (in the ultimate chapter) the common errors made in time series analysis. Time series are the kind of data you see in, for example, temperature or stock price plots through time.

The xkcd post (thanks, by the way, to all the many readers who emailed about it) entitled “A Timeline Of Earth’s Average Temperature” makes a slew of fun errors, but — and I want to emphasize this — it isn’t xkcd’s fault. The picture he shows is the result of the way temperature and proxy data are handled by most of the climatological community. Mr. Munroe, the xkcd cartoonist, is repeating what he has learned from experts, and repeating things from experts when you yourself don’t know the subject is a rational thing to do.

The plot purportedly shows the average global temperature, presumably measured right above the surface, beginning in 20,000 BC and ending in the future at 2100 AD. It also shows the temperature rising anomalously in recent decades and soaring into the aether in the future. Now I’m going to show exactly why xkcd’s plot fails, but to do so is hard work, so first a sort of executive summary of its oddities.

The Gist

(1) The flashy temperature rises (the dashed lines) at the end are conjectures based on models that have repeatedly been proven wrong — indeed, they’ve never been proven right — by predicting temperatures much warmer than today’s. There is ample reason to distrust these predictions.

(2) Look closely at the period between 9000 BC until roughly 1000 AD, an era of some 10,000 years which had, if xkcd’s graph is true, temperatures much warmer than we had until the Internet. And this was long before the first barrel of oil was ever turned into gasoline and burned in life-saving internal combustion engines.

(3) There was no reason to start the graph at 20,000 BC. If xkcd had taken the timeline back further, he would have had to have drawn temperatures several degrees warmer than today’s, temperatures which outstrip the threatened warming promised by faulty climate models. And don’t forget that warmer temperatures are always associated with lush and bountiful periods in earth’s history. It’s ice and cold that kill.

(4) The picture xkcd presents is lacking any indication of uncertainty, which is the major flaw. We should not be looking at lines, which imply perfect certainty, but blurry swaths that indicate uncertainty. Too many people are too certain of too many things, meaning the debate is far from “settled.”

Unknown Unknowns

The temperature at 20,000 BC was, Munroe claims — no doubt relying on expert sources — about 4.3 C colder than the ad hoc average of temperatures from 1961-1990.

Was it actually 4.3 C cooler? How do we know? Forget the departure from the ad hoc average, which is a distraction. How do we know what the temperature was all those years ago? After all, there were no thermometers.

The answer is — get a pen and write this down, it’s crucial — we don’t know.

I’ll repeat that, because it’s a crucial point. We do not know what the temperature was. Yet here is xkcd, and climatologists, saying they do know. What gives?

Statistical modeling is what gives. This is complicated, so follow me closely. That it is complicated is why so many mistakes are made. It is tough to keep everything in mind at once, especially when one is in a hurry to get to an answer (I do not say the answer).

Temperature in history can’t be measured, but things associated with temperature can. For instance, coral grows at different rates at different temperatures. Certain chemical species dissolve in water at different rates at different temperatures. These other measures are called proxies. If we know the proxy, we can make a guess of the temperature.

Unlike temperature, proxies can often be measured historically, although never without error. That’s point (1): proxies are measured with error. After all, it’s hard to know exactly how much of a certain isotope of oxygen was present everywhere on Earth 22,000 years ago, yes? Don’t forget we’re talking about global average temperature. And then it’s not always clear that the historical dates are quite accurate either. Can you tell the difference between 22,000 and 21,999 years ago in an ancient chunk of coral? These are the two classes of measurement error. They have different effects, as we’ll see.

A statistical model between the proxy and the temperature, both measured in present day, is then built. Statistical models have internal guts, things called parameters. One or more of these parameters will relate how the change in the proxy is associated with the change in the uncertainty of the temperature. Not change of the temperature: change of the uncertainty in the temperature. Usually, and unfortunately, what happens is that these parameters are mistaken as the temperature (and not its uncertainty). That’s point (2): instead of reporting on parameters, about which we can be as certain as possible, what these models should but don’t report are predictions.

That is, the proxy measured at 20,000 BC is fed into the model and the parameter effect is reported. The uncertainty in this parameter effect, sometimes called a confidence interval, might also be reported. But these are always beside the point. Who cares about some dumb model? What’s needed is a prediction of what the temperature might have been given the assumed/measured value of the proxy. Not only the prediction, but also some indication of the uncertainty in the prediction should be given. (Even these predictions will be too sure because we can’t check the models’ validity in times historical; the reader here understands I am simplifying for a general audience.)
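To make the parametric/predictive distinction concrete, here is a minimal sketch in Python. Every number in it is invented for illustration; it simply fits a straight-line proxy-temperature regression and prints the narrow parametric (confidence) band next to the much wider predictive band for a single ancient proxy value.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: present-day proxy readings and temperatures.
n = 40
proxy = rng.uniform(0.0, 10.0, n)                   # e.g. an isotope ratio
temp = 12.0 + 0.8 * proxy + rng.normal(0, 1.5, n)   # assumed link plus noise

# Ordinary least squares by hand.
x_bar, y_bar = proxy.mean(), temp.mean()
Sxx = np.sum((proxy - x_bar) ** 2)
slope = np.sum((proxy - x_bar) * (temp - y_bar)) / Sxx
intercept = y_bar - slope * x_bar
resid = temp - (intercept + slope * proxy)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))           # residual standard error

x0 = 3.0                                 # a proxy value "measured" at 20,000 BC
fit = intercept + slope * x0

# Parametric (confidence) band: uncertainty about the fitted mean only.
se_mean = s * np.sqrt(1 / n + (x0 - x_bar) ** 2 / Sxx)
# Predictive band: uncertainty about an actual temperature, noise included.
se_pred = s * np.sqrt(1 + 1 / n + (x0 - x_bar) ** 2 / Sxx)

print(f"fit {fit:.1f} C, +/- 2 SE: parametric {2 * se_mean:.2f}, predictive {2 * se_pred:.2f}")

The fitted value is the same either way; only the honesty about what we might actually have seen changes.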

The xkcd cartoon did not show any uncertainty. So Munroe, and climatologists, make it appear that we know to a great degree just what the global average temperature was a long time ago. Which we don’t; not exactly.

The errors made thus far fool modelers into thinking they know much more than they do. The parametric confidence intervals tell us of the model guts and not about the temperature, and so using only these intervals guarantees over-certainty, and a lot of it. That we don’t incorporate the two kinds of proxy measurement error also guarantees even more over-certainty. I say guarantees: this is not a supposition, but a logical truth.

The combined effect of forgetting about the measurement error is to produce uncertainty bounds that are again too narrow (because of the first type of measurement error) and graphs that are way, way too smooth (because of the second).

Have you ever noticed how smooth and cocksure plots of historical temperature are, like xkcd’s? These errors are why. What we should actually see, instead of xkcd’s smooth, pretty line, is a vast wide blur, which is blurrier the farther back in time we go, and more focused the nearer to our time.
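Here is a toy simulation, again in Python with invented numbers, of what the second kind of measurement error does. A sharp fifty-year warm spike in the “true” series all but vanishes once each proxy observation carries a century or so of dating error:

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000)
true_temp = np.zeros(years.size)
true_temp[1000:1050] = 1.5          # a sharp, 50-year warm spike in the "truth"

n_proxies = 50
recon = np.zeros_like(true_temp)
for _ in range(n_proxies):
    date_err = rng.normal(0, 100, years.size).astype(int)     # +/- a century of dating error
    idx = np.clip(years + date_err, 0, years.size - 1)
    recon += true_temp[idx] + rng.normal(0, 0.1, years.size)  # plus reading error
recon /= n_proxies

print(f"true peak: {true_temp.max():.2f} C, reconstructed peak: {recon.max():.2f} C")

The spike is still there in reality; the averaged reconstruction simply cannot see it.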

The bottom portion of the cartoon.

Now look at the end of xkcd’s plot, where more errors are found. Start around Anno Domini 1900. By that time, thermometers are on the scene, meaning that new kinds of models to form global averages are being used. These also require uncertainty bounds, which aren’t shown. Anyway, xkcd, like climatologists, stitches all these disparate data sources and models together as if the series is homogeneous through time, which it isn’t.

Here’s point (3): Because we can measure temperature in known years now (and not then), and we need not rely on proxies, the recent line looks sharper and thus tends to appear to bounce around more. It still requires fuzz, some idea of uncertainty, which isn’t present, but this fuzz is much less than for times historical.

The effect is like looking at foot tracks on a beach. Close by, the steps appear to wander vividly this way and that, but peer at them in the distance and they appear to straighten into a line. Yet if you walked to that distant spot, you would find the path just as jagged. Call this misperception of time series, the one on which xkcd relies for his joke, statistical foreshortening. It is an enormous and almost always unrecognized problem in judging uncertainty.
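Statistical foreshortening is easy to simulate (Python again, toy numbers). The underlying anomalies below are equally jagged everywhere, but the stretch seen only through 500-year proxy averages looks nearly flat beside the stretch seen year by year:

import numpy as np

rng = np.random.default_rng(2)
annual = rng.normal(0, 0.3, 22000)     # anomalies equally jagged in every era

recent = annual[-150:]                                    # annual resolution (thermometer era)
distant = annual[:20000].reshape(-1, 500).mean(axis=1)    # 500-year proxy averages

print("typical visible jump, recent :", round(float(np.abs(np.diff(recent)).mean()), 3))
print("typical visible jump, distant:", round(float(np.abs(np.diff(distant)).mean()), 3))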

There is one error left. From Anno Domini 2016 to 2100 xkcd shows dashed lines which are claimed to be temperatures. They aren’t, of course; at least not directly. They are guesses from climate models. These too should have uncertainty attached, but don’t.

The type of uncertainty xkcd should have shown is again not “parametric” but predictive. We could rely on what the models themselves are telling us to get the parametric, or we could rely on the actual performance of the models to get the predictive. Only the actual performance counts.

I’ll repeat that, too. Only actual model performance counts. That’s how science is supposed to work. We trust only those models that work.

Can we say anything about actual performance? Yes, we can. We can say with certainty that the models stink. We can say that models have over a period of many years predicted temperatures greater than we have actually seen. We can say that the discrepancy between the models’ predictions and reality is growing wider. This implies the uncertainty bounds on xkcd’s dashed lines should be healthy and wide. This is why xkcd’s, and the climatologists’, “current path” is not too concerning. There is no reason to place much warrant in the models’ predictions.
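For the flavour of a performance-based check, here is a bare-bones sketch in Python. The forecast and observed numbers are placeholders I made up purely to show the arithmetic, not real model output or real measurements: subtract what happened from what was predicted, then let the bias and the spread of those misses set the bounds on the next prediction.

import numpy as np

# Placeholder numbers only: past model forecasts vs. what was later observed.
forecast = np.array([0.30, 0.35, 0.42, 0.48, 0.55, 0.61])   # predicted warming (C)
observed = np.array([0.22, 0.25, 0.30, 0.33, 0.38, 0.41])   # observed warming (C)

errors = forecast - observed
bias = errors.mean()            # systematic over-prediction
spread = errors.std(ddof=1)     # how variable the misses are

next_forecast = 0.70
print(f"bias {bias:+.2f} C; performance-based guess "
      f"{next_forecast - bias:.2f} +/- {2 * spread:.2f} C")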

If I could draw a stick figure, I would do so here with the character saying, “Buy my book to learn more.”

Comments
  • Alice Cheshire

    Proxies, as far as I know, were never thermometers. They were general estimates about the climates of the past. Only global warming turned a generality into a flat line temperature until the starting point for the measurable rise of CO2 in the atmosphere. Quite a magician’s/psychic’s feat.

    • CB

      “Proxies, as far as I know, were never thermometers.”

      That’s absolutely correct, Alice.

      …but that doesn’t mean they are worthless.

      “Climate Milestone: Earth’s CO₂ Level Passes 400 ppm. Greenhouse gas highest since the Pliocene, when sea levels were higher and the Earth was warmer.”

      (National Geographic, Robert Kunzig, “Climate Milestone: Earth’s CO2 Level Passes 400 ppm”, May 12, 2013)

    • NiCuCo

      Satellites, on the other hand, measure temperature with thermometers dropped on long strings.

  • PRAFTD

    People who know nothing about Geology and Climatology shouldn’t speak on the subject. A philosopher masquerading as “scientist” is laughable.

    Come back when you have your PhD.

    • BuzEaston

      Instead of committing the ad hominem fallacy, show where the author is in error. I don’t know what you are, but your masquerading as a logical person is laughable. Come back when you learn logic.

    • Wayne Cook

      He does, bonehead. From your statement, it’s easy to perform the forensics and quickly realize you don’t know, so you attack him generally. Mocking him helps alleviate your personal anxiety. Outside of that, you’re just pissed. And pissed doesn’t need a PhD to be true. Not even a laudable attempt, mate.

    • Spetzer86

      The really funny thing about your comment is that’s exactly how statisticians see climate scientists. Global warming hinges on statistical errors made by “scientists” who should’ve asked a competent statistician for help before publishing.

      • ^THIS.

        I would have used strike-through rather than scare-quotes on the word ‘scientists’, though; there is as much ‘science’ in ‘climate science‘ as there is in ‘Scien’tology or the Church of Jesus Christ, ‘Scientist’.

        Give any competent numerical modeller a climate model, and they could produce literally any path with it, and call that path a ‘forecast’.

        Until there are bounds around a forecast that represent a series of ‘deviation’ forecasts where the parameters and exogenous variables are jointly perturbed by random σ-sized deviations from their base values, I ain’t buyin’ any forecast.

        Any forecast from a climate model that did that ‘full’ sensitivity analysis, would have forecasts that went from +100° to -100° – and the entire charade would be exposed for the gravy-train charlatanry that it is.

    • Dean Bruckner

      What are your credentials? I use my real name and have a Ph.D. in electrical engineering, which includes statistics and digital signal processing, as well as time series analysis. I teach engineering design and applied statistics.

      You?

    • Dee Bee

      PRAFTD – PRActically A F TurD?

  • I am critical of Briggs’ criticisms of this cartoon. Briggs uses language like “plot fails” and “error” in describing the cartoon, but it is in fact neither erroneous nor a failure. The cartoon is rather presented in a way that differs from Briggs’ personal, subjective opinion. I don’t wish to spend a lot of time on this internet comment, but to reply to “The Gist”:
    1. Forecasts cannot be “proven right” in a rigorous, mathematical sense for such systems.
    2. There have been many periods during the past couple of million years that have been warmer than today, according to our best methods of assessing global temperatures.
    3. There’s no reason to start the graph anywhere in time. But, in Munroe’s view, around the last glacial maximum would be a good point given Munroe’s message of the cartoon.
    4. Uncertainty is important, but it does not and, I’d argue, cannot change the interpretation of these estimates. That is, point estimates (the line in the time series) give information that allows observers of the cartoon to gauge the relative change in the proximate indicators of temperature. It would be ignorant of someone to deny that the last glacial maximum during the late Pleistocene was drastically cooler than today, that global temperatures have relatively gradually increased since then, and that over the past 200 years we’ve seen a drastic increase in average global temperatures.

    And as an aside, I hope that the term “proxies” is taken with a little more seriousness than Briggs and the commenters seem to take it. Proxies in the past 20K years or so are quite sound and more diverse than most people are aware. The geologic and paleoecologic data are multifarious, which allows climatologists, geologists, and paleoecologists to use strong inference about their findings. This qualitative and powerful collection of methodologies, although not always included in climatological models, does paint an abundantly clear and accurate picture of the recent past.

    Thanks for reading that 🙂

    • Dee Bee

      Nonsense on point 4. Showing the (presumably most likely) value means nothing unless the variance is very narrow. Otherwise it is misleading. And your last para after 4 is double nonsense. First three points seem OK.

    • Mark Goldfinch

      Nonsense also, on point 1: The models have been shown to be wrong on every occasion. Even now, the IPCC can’t use a single model that predicts anything correctly. They use an ensemble, which is to say a mishmash of models which are individually all so wrong they’re unusable. By averaging these models together the hope is that something predictive will come out – a vain hope given the evidence.

      A second point on the models: journalist activists constantly point to the latest model and claim ‘new evidence’. A model is not evidence, but a computerised theory. The modelers have tailored their models to show exactly what they want. If they didn’t, they would have kept tweaking until they did. So, just because a model claims a large (fatal?) amount of warming in our future doesn’t mean it will actually happen. Just that a modeler thought it should.

    • “Forecasts cannot be “proven right” in a rigorous, mathematical sense for such systems.”

      Yes they can. There are two very obvious ways, and both have more rigour than you could poke a stick at. If you don’t know what those two ways are, then you should absent yourself from discussions about models because you will just parrot some “innumerate grad-student’s phrase book” guff about model limitations… and anyone who knows anything about numerical modelling will know you’re talking rubbish.

      Firstly (and most easily): if performed probabilistically (as all good forecasts should be), a forecast will result in a manifold rather than a line (a set of point estimates at each point in the future).

      The forecast is ‘correct’ to the extent that subsequent outcomes for the variables of interest, lie within a small-epsilon hypercube of the central tendency of the manifold.

      Think of a model with just 2 equations and 2 time periods (today and tomorrow) where a stochastic simulation generates many millions of sample forecast paths, each with some assigned probability; tomorrow, you get to measure the actual outcomes, and you can compare them to the bivariate probability distribution created by the simulation.

      That’s the ‘easy’ way, and I’m not really a fan of it because it doesn’t properly penalise bad models (models whose distributions are wider than should be expected for the variable of interest).
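      A bare-bones sketch of that ‘easy’ way, in Python, with a made-up 2-variable system and a made-up ‘actual’ outcome:

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy 2-variable, one-step-ahead system: tomorrow = A @ today + shocks.
      A = np.array([[0.8, 0.1],
                    [0.2, 0.7]])
      today = np.array([1.0, 2.0])

      # Stochastic simulation: many sample paths, each a fresh draw of tomorrow's shocks.
      paths = today @ A.T + rng.multivariate_normal([0, 0], 0.05 * np.eye(2), 100_000)

      centre = paths.mean(axis=0)
      actual = np.array([1.05, 1.52])      # what you measure tomorrow (invented)

      eps = 3 * paths.std(axis=0)          # a small-epsilon box around the central tendency
      print("actual outcome inside the box:", bool(np.all(np.abs(actual - centre) <= eps)))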

      However a far far better testing methodology exists: it’s called historical validation. It doesn’t just test whether a model can forecast ‘accurately’ – it tests whether the model can forecast a known segment of the past with any precision.

      A historical validation takes place after the model is built and estimated. It uses known values for all variables in the model: endogenous, exogenous, parameters and estimation residuals.

      (Some background for those playing at home: an ‘endogenous’ variable is a model’s output; an ‘exogenous’ variable is a model’s input. A set of N equations can only solve for N variables: if you have M variables in total, M-N must be ‘given’ to the model. The partition of the M variables into N endovars and M-N exovars is called a ‘closure’).

      So onwards….

      You build a model, and you estimate the parameters in the ‘structural’ equations (‘structural’ equations in a system are the ones that contain per-equation error terms: by contrast, there are also ‘identities’ that must be true at all times – ‘nothing to estimate here’). That uses an ‘estimation’ closure, which is one where the ‘naturally’ endogenous variables are set as exogenous and fixed at their actual values, and the parameters and residuals are made endogenous and allowed to ‘flex’ so that some loss function is minimised (or a likelihood function is maximised… same same).

      That’s after the practitioner has made sure that all equations have the same order of integration on both sides (if not, your model is wrong), and that the estimation technique is appropriate (e.g., if the Gauss Markov conditions hold, you can use OLS; if not, you have to do other stuff… and if it’s a system of equations, it should be estimated with a systems estimator like NL3SLS).

      The estimation process gives you a vector of parameters – each represents an estimate of the ‘sensitivity’ of the endogenous (left-hand-side) variables with respect to the right-hand-side variables.

      It also gives you a matrix of residuals (one per structural equation per time period).

      Residuals and parameters are ‘naturally exogenous’ in a forecast (and also in a validation – which is just a type of forecast).

      To perform a historical validation you use the ‘forecast’ (or ‘natural’) closure – ‘dependent’ variables endogenous; regressors, parameters and residuals exogenous.

      Then, you
      (1) set the exogenous variables to their actual historical levels;
      (2) set the parameters to their estimated levels; and
      (3) set the residuals to zero;
      (4) solve the model.

      This gives you what the model ‘would have told you’ about the endogenous variables if your forecasts for the exogenous variables were exactly right, and your residuals were set to their expected values (zero in all ‘future’ periods).

      (Note: in a one-equation static model, this will just be the same as calculating ‘y-hat’ – but in simultaneous-equations systems, and in systems with temporal linkages across equations, it most definitely is not the same as ‘y-hat’ on an equation-by-equation basis).
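      For those playing at home, here is a toy version of steps (1)-(4) in Python: two equations, invented parameter values standing in for the estimation step, residuals zeroed out, the system solved period by period and then compared with its own ‘history’.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 60

      # "History": exogenous inputs and a noisy data-generating process.
      x, z = rng.normal(size=n), rng.normal(size=n)
      y1, y2 = np.zeros(n), np.zeros(n)
      for t in range(1, n):
          y2[t] = 0.9 * y1[t - 1] + 0.5 * z[t] + rng.normal(0, 0.2)
          y1[t] = 0.8 * y2[t] + 0.3 * x[t] + rng.normal(0, 0.2)

      # Pretend these parameter values came out of the estimation closure.
      a, b, c, d = 0.8, 0.3, 0.9, 0.5

      # Historical validation under the forecast closure: actual exogenous values,
      # estimated parameters, residuals set to zero, solved period by period.
      sim1, sim2 = np.zeros(n), np.zeros(n)
      sim1[0] = y1[0]
      for t in range(1, n):
          sim2[t] = c * sim1[t - 1] + d * z[t]
          sim1[t] = a * sim2[t] + b * x[t]

      rmse = np.sqrt(np.mean((sim1[1:] - y1[1:]) ** 2))
      print(f"per-equation noise s.d. 0.20 vs dynamic 'hindcast' RMSE {rmse:.2f}")

      Because the simulation feeds on its own lagged solution rather than the actual lags, the misses compound, which is exactly why this is not the same as equation-by-equation ‘y-hat’.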

      I’ve done this hundreds – if not thousands – of times (with large-scale macroeconometric and computable-general-equilibrium models) and let’s be clear: all models do a TERRIBLE job at historical validation. Most practitioners don’t even bother, because most of the time the model will just fail to solve.

      That is to say, given the model structure and “known-knowns” for all values of all data, the most advanced models will fail to ‘predict’ the actual values of the ‘dependent’ variables.

      Forecast uncertainty is a very significant limitation on models (which is why in my PhD thesis I spent an entire section on the reasons why all forecasts need a probabilistic aspect – TL;DR: non-bijectivity from the exogenous space to the important (reported) subset of the endogenous variables… so forecasts of the mean of the exovars will not yield the mean of the endovars). People ignore that the model must be ‘fed’ the forecast values of the exogenous variables, and that those are uncertain: stochastic simulation includes some (small) steps to ameliorate this problem, but it’s seldom done.

      • libertyPlease

        lol…Krato, I love when you reply like this. How many people, do you estimate, are alive that can follow the details of what you posted… without opening 10 textbooks? 100?

    • Flame Boar

      Christopher,
      Briggs is correct. “Error” is the correct term to use. Error is part of all measurements whether they are proxy or instrument based. For example, if you are measuring tree rings, you must visually select a point where one ring ends and another begins. You must do this for hundreds and sometimes thousands of rings. Your ability to accurately choose the precise point has an error. This error is finite and relatively large with proxies.

      If you are using instruments, there is also error in reading them. The mercury thermometer was used through most of the historic temperature record. What point of the mercury/glass meniscus do you choose as the temperature? The bottom, the middle or the top? How closely can you align the point you chose on the meniscus with the temperature scale? Your angle of view will change your temperature reading. How accurately can you measure the difference between any two lines on the thermometer? These are three errors which are intrinsic to mercury thermometer temperature records. When we are talking about changes per year of less than 0.01 C, these errors are large.

      It should be noted that errors are additive. Furthermore, taking lots of data does not reduce the error. As Briggs indicates in his article, showing data without showing the measurement error is, in the best case, misleading.

      • You’re right about the need for error bars (or at least some acknowledgement of uncertainty in historical data), but it’s not solely ‘error’ in measurement and observation.

        Tree ring densities, ice core gas-bubble compositions and other ‘proxy’ measurements used to construct the long-term average temperature, are instrumental variables for temperature – that is, there is some statistical model that links tree ring density and annual average temperature, of the form

        T = f(D; θ)

        where

        T is annual average temperature;
        D is observed tree ring density; and
        f() is some function that has a vector of parameters θ .

        (I have deliberately left off time subscripts – that is, I am allowing that temperature at time ‘n‘ can affect tree ring density at more than 1 time period after ‘n‘ – it does not make sense for it to be able to affect densities before ‘n‘, obviously).

        That statistical model has embedded uncertainty, because it is not the case that a tree ring density of d always leads exactly to temperature t equal to some error-free value f(d). Rather, there will be an estimated relationship

        T = f(D; θ) + u

        where u is an error term.

        Minimising the sum of the squares of the u gives the ‘least squares’ estimates of the parameters θ (which are unknown); the individual u will not be zero, and so the ‘estimated’ value of T (called ‘T-hat’, usually) is equal to T-u.

        Furthermore, the variability of u and the variance of θ are linked – so the stochastic properties of forecasts of T (of which error bars are based on the first moment) can be calculated; what anyone who knows how to do this can prove to themselves is that the errors widen like an ear-trumpet, the further you go into the future.
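        A minimal numerical sketch of that linkage (Python, invented calibration data): the prediction standard error combines the variance of θ with the variance of u, and it widens as D moves away from the calibration sample.

        import numpy as np

        rng = np.random.default_rng(5)

        # Invented calibration sample: densities and temperatures.
        D = rng.uniform(0.8, 1.2, 30)
        T = 10 + 5 * (D - 1) + rng.normal(0, 0.4, 30)

        # Least squares for T = theta0 + theta1 * D + u.
        X = np.column_stack([np.ones_like(D), D])
        theta, *_ = np.linalg.lstsq(X, T, rcond=None)
        resid = T - X @ theta
        s2 = resid @ resid / (len(D) - 2)
        cov_theta = s2 * np.linalg.inv(X.T @ X)     # Var(theta) and Var(u) linked via s2

        # The prediction band widens as D moves away from the calibration range.
        for d0 in (1.0, 1.4, 2.0):
            x0 = np.array([1.0, d0])
            se = np.sqrt(x0 @ cov_theta @ x0 + s2)  # parameter uncertainty + irreducible u
            print(f"D = {d0:.1f}: T-hat = {x0 @ theta:5.2f}, +/- 2 SE = {2 * se:.2f}")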

        This is actually in addition to the absolutely-valid problems that Briggs identifies in his discussion of the ‘smoothed’ data, and the illegitimacy of smoothing past proxy-based estimates and sticky-taping them to modern thermometer measurements.

        I’ve said this before: any competent numerical modeller could get a climate model to yield any results that were desired, by changing parameter levels – without changing any single parameter outside of its plausible (statistically-defensible) range.

        Back in the 90s as a pedagogic exercise for some students I was teaching, I built a simple model that demonstrated this type of model-manipulability; it could generate anything from a ‘hockey stick’ to an upside-down ‘hockey-stick’ (its default output was a ‘pure’ sine wave), by changing sets of parameters… but without ever changing any single parameter by more than 1 s.d.
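        Not that 90s model, obviously, but a ten-line stand-in in Python makes the same point: move two parameters by exactly one ‘standard deviation’ each and the default sine wave becomes a hockey stick, or an upside-down one.

        import numpy as np

        t = np.linspace(0.0, 1.0, 200)

        def toy_model(p1=0.0, p2=0.0, sd=0.5):
            """Default output is a pure sine wave; p1, p2 are in units of their own s.d."""
            base = np.sin(2 * np.pi * t)
            kick = p1 * sd * 8 * np.maximum(t - 0.8, 0) / 0.2   # late-period upturn
            tilt = p2 * sd * 2 * t                              # slow background drift
            return base + kick + tilt

        flat = toy_model()                  # the 'pure' sine wave
        hockey = toy_model(p1=+1, p2=+1)    # both parameters moved by +1 s.d.
        upside_down = toy_model(p1=-1, p2=-1)

        print("end-of-series values:",
              round(float(flat[-1]), 2), round(float(hockey[-1]), 2), round(float(upside_down[-1]), 2))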

        • Flame Boar

          Your comments about tweaking the parameters to make climate models provide any result you would like are a point that people on both sides of the CAGW issue should understand. This is, in fact, how the modelers tweak their programs to hindcast with minimal deviation from historical data. However, once adjusted to hindcast, the same parametric settings will not give their models the skill to match empirical data going forward.

  • RoHa

    “temperatures much warmer than we had until the Internet.”

    So it is the Internet which is causing global warming? Or am I committing a post-hoc-ergo-propter-hoc error?

  • An issue not frequently discussed is how accurate the proxies are. In at least one case the proxies in question are tree ring data. Alas, this data is more indicative of tree growth than of temperature. The assumption is that growth is driven by warmer temperatures, as opposed to (say) more sunlight and a better supply of nutrients.

  • antoinepgrew

    What I enjoy about your writing is your clarity. You can make complex statistical matters clear. So why is the book you link to 64 bucks?

  • lee Phillips

    Point (2) in the “Gist” used to say that the xkcd graph claimed that the “period between 9000 BC until roughly 1000 AD” had “temperatures much warmer than we have now”. But now the original version of the article has been silently changed. Why?

    The current version of (2) makes no syntactic sense to me at all.

  • Ryder

    Mr. Briggs… can you please do a graph identical in style to the XKCD one, but that instead shows the “fuzzy band” as you understand it… in other words, answer his cartoon with something that better reflects uncertainty?

    A visual response is the one most people really need to see.

  • Sabretruthtiger

    Why are you not addressing the massive, glaring error? The 2000-2016 period is completely wrong! It shows a steep incline when it was THE PAUSE!

    • Bart_R

      The reason xkcd shows a steep incline for the 2000-2016 period is that the data show a steep incline over that period on climate timescales.

      “THE PAUSE!” is a myth on climate timescales, and is only visible when you cherry pick timescales too short to reflect the overall climate, unless you also use numerical methods that show both climate and weather.

      www DOT woodfortrees DOT org/plot/gistemp/from:1880/every:12/plot/gistemp/from:1880.083/every:12/plot/gistemp/from:1880.166/every:12/plot/gistemp/from:1880.25/every:12/plot/gistemp/from:1880.333/every:12/plot/gistemp/from:1880.416/every:12/plot/gistemp/from:1880.5/every:12/plot/gistemp/from:1880.583/every:12/plot/gistemp/from:1880.666/every:12/plot/gistemp/from:1880.75/every:12/plot/gistemp/from:1880.83/every:12/plot/gistemp/from:1880.916/every:12/plot/esrl-co2/normalise/offset:0.4

  • I was shocked to find a copy of Uncertainty: The Soul of Modeling, Probability & Statistics in a roadside trash basket, so I took it home and read it and discovered why it had been dumped.

  • Bart_R

    Pretending to enough passing familiarity with Uncertainty to really not need to wade through yet another text on the matter, let’s have a look at Randall Munroe’s supposed ‘mistakes’ in interpreting the spurious alleged ‘climatological community’.

    ..it isn’t xkcd’s fault..

    Randall’s a grown up. He signed his name to the comic. It’s neither his first on the topic, nor is it out of line with his public statements outside of the comic medium. Blaming some imagined ‘climatological community’ is simple tribalist propaganda. Since Randall Munroe is more than adequately competent to understand Uncertainty and time series operations, as an accomplished professional with substantive credentials, one suggests William M. Briggs owes Randall Munroe an apology for his patronizing attitude toward a colleague. Munroe most certainly knows what he’s talking about.

    (1) The flashy temperature rises (the dashed lines) at the end are conjectures based on models that have repeatedly been proven wrong — indeed, they’ve never been proven right — by predicting temperatures much warmer than today’s. There is ample reason to distrust these predictions.

    Who proved these said models wrong, exactly?

    Which models precisely are you saying were they again?

    Certainly, if you’re bold enough to reference your own (much too expensive) book, you’d be kind enough to reference the peer-reviewed literature on the specific Navier-Stokes GCMs you claim have been proven wrong. I don’t say this idly; Navier-Stokes equations are at the heart of much of modern engineering; if these equations have been disproven, then it’s news to the engineering world, and has serious implications for technology and industrial safety thereof worldwide. Or is it the process of computer simulation you claim is wrong? Again, computer simulation is a major industry in its own right, and if you’ve proven computer simulation is broken, you ought reveal that to the world.

    The dashed lines at the end of the Timeline are projections of Physics, as well-established and with as much evidence as black holes, extrasolar planets, the Higgs boson, dark energy, dark matter, and general relativity combined. Science holds exact relations inferred from all observations given simplest possible baseline assumptions and exceptions (but no simpler) and most general application, until new observations require amended or new relations. That’s Newton’s Regulae Philosophandi, as amended by Einstein.

    Given a choice of siding with Newton, Einstein, tens of thousands of professional climatologists and Randall Munroe, or William M. Briggs’ idiomatic views on Uncertainty?

    That’s no contest, I’m sure you must agree. So this first ‘point’ is simply invalid, a false framing of how science works, built on hearsay and infamy, not fact.

    (2) Look closely at the period between 9000 BC until roughly 1000 AD, an era of some 10,000 years which had, if xkcd’s graph is true, temperatures much warmer than we had until the Internet. And this was long before the first barrel of oil was ever turned into gasoline and burned in life-saving internal combustion engines.

    Huh. “Your honor, from 9000 BC to 1000 AD, people often died. And this was long before the first gun was invented. Therefore you must acquit my life-saving client on the charge of murder, despite the smoking gun in his hand, the fingerprints on the shell casings, the ballistics match of his weapon to the bullets recovered from the body of the victim, and the eyewitness testimony.”

    By Argumentum ad Absurdum we clearly dismiss this second invalid point.

    Milankovitch cycles clearly account for the Holocene Optimum, and just as clearly do not portend the current warming that only GHG’s from fossil waste dumping explain.

    (3) There was no reason to start the graph at 20,000 BC. If xkcd had taken the timeline back further, he would have had to have drawn temperatures several degrees warmer than today’s, temperatures which outstrip the threatened warming promised by faulty climate models. And don’t forget that warmer temperatures are always associated with lush and bountiful periods in earth’s history. It’s ice and cold that kill.

    Well, setting aside that the graph is of the Holocene in context, and that taking the Timeline back arbitrarily far would simply make Randall Munroe’s case stronger, at the expense of being a much, much longer drawing (one that on the same scale would be two million times longer, with little additional merit), what’s the beef?

    How is it a mistake to start a Holocene Timeline just before the start of the Holocene for context?

    Randall Munroe clearly shows the reason for focus on this 20,000 years of stability is important: it’s when all of human civilization arose, and clearly the connection between climate stability and human ability to advance civilization is proven by Munroe’s Timeline argument. How did you miss that point?

    And why forget that hot alien conditions in our world favor alien monsters incompatible with human safety: giant poisonous insects, carnivorous megafauna, and inedible-to-human flora? None of which is really relevant unless humanity survives the scouring of the world by the extreme weather of a climate changing 15 to 200 times faster than is natural.

    (4) The picture xkcd presents is lacking any indication of uncertainty, which is the major flaw. We should not be looking at lines, which imply perfect certainty, but blurry swaths that indicate uncertainty. Too many people are too certain of too many things, meaning the debate is far from “settled.”

    READ HARDER.

    Randall Munroe clearly discusses uncertainty, and how his lines do not imply perfect certainty. How did you miss it?

    If you really need a comic explained to you, there’s a service for that: www DOT explainxkcd DOT com/wiki/index.php/1732:_Earth_Temperature_Timeline

    Look between 16000 BCE and 15500 BCE, under “Limits of This Data”. There’s blurred lines and everything.

    Blurring is not the only way to depict uncertainty, and is often inadequate or misleading, as any professional trendologist ought know. Randall Munroe certainly knows this.

    And again, Regulae Philosophandi, the proven approach of professional scientists for three centuries, tells us that we ought move forward at every point as if certain of the exact relations determined from all evidence, until new evidence requires amended or new relations. Paralysis by analysis is the path of ruin.

    • Matthew_Bailey

      I’m glad you posted this so that I did not have to.

      • Bart_R

        While I thank you for the vote of support, generally I encourage people who agree with me to post anyway.

        I also encourage people who disagree with me to post, too.

  • Matthew_Bailey

    I was going to write out this big long post on the author’s own “failings” in his criticism of Randall Munroe’s/XKCD’s Global Warming post.

    But I see that others have done so in much fuller detail than I was going to go into.

    But one point many of them missed.

    Even if we take the graph back millions of years prior to the Holocene, to when “warmer than current temperatures existed”, this IGNORES THE SPEED with which the global temperatures changed during those periods.

    And we have two specific instances where the temperature change was commensurate with the current.

    And both mark Global Extinction events (The K-Pg and Permian-Triassic) where Global Temperatures were affected by Asteroid/Comet Impacts.

    Which brings us to our current situation.

    Evolution cannot adapt to Temperature Changes that happen too quickly.

    It takes thousands to hundreds of thousands of years for climatic adaptations to occur without severe disruption to ecosystems.

    Or in cases where the Temperature change is not severely dramatic, over hundreds of years we can see species geographic migration to small degrees that allow for continued ecosystem viability.

    Our current temperature changes are occurring over decades.

    And nothing but single-celled organisms (whether Eukaryotes or Prokaryotes), and possibly (POSSIBLY) simple Eukaryotic multi-celled life, can adapt evolutionarily at those speeds.
