
A Thought Experiment on Taxes

18 April 2011

A lot of people think economics is an impenetrable fog, but it needn’t be that way.  A lot of elementary economics is common sense.  And one of the best ways qb knows to get common-sense ideas on the table is to perform “thought experiments.”  In a thought experiment, we ask a series of questions about a hypothetical situation, and we build our understanding incrementally; eventually, the general contours of the phenomenon we seek to describe become evident.

So let’s do that with a simple taxing scheme, and ask the question:  how much revenue does a taxing entity receive at a given tax rate?  And what is the relationship between the tax rate on any given individual and the amount of tax revenue received from that individual?  This is a great set of questions for a thought experiment.

Let’s begin with the most trivial, easily understood idea:  how much revenue is generated at a tax rate of 0%?  It should be obvious that the answer here is ALWAYS zero.  If no tax is levied, there is no revenue.  This results in the following chart, thus far, with the blue marker at (0%,$0):

Next, and only slightly more challenging, is this question:  how much revenue is generated at a tax rate of 100%, in which the taxing entity confiscates all of the individual’s income?  Here, we have to invoke a key assumption, to wit, that the individual in question is both (a) free and (b) self-interested.  The individual must not be utterly enslaved to the taxing entity in order for us to establish common ground with our situation in the U. S.  And the individual must have his/her own self-interest at heart, although that self-interest may include the interests of his/her dependents, as would be implied by listening to the Pastoral epistles:  taking care of one’s own family is a moral requirement, whether the breadwinner be the wife or the husband.  Let us be generous, then, and posit that our individual is the breadwinning woman and that her husband and children are dependent on her income for food, shelter, and clothing.

How much revenue will she produce if all of her income is confiscated by the taxing entity?  Again, the answer is zero; no matter how hard she works, she does not get to take anything home, so there is no incentive for her to engage in productive work.  She might as well stay home, help take care of the dwelling, and scavenge for resources, an activity which (we may presume) is beyond the reach of the taxing entity.  In any case, she will not work for pay lest her time and effort be wasted and lest the taxing entity confiscate all she earns.  So we can add one more point to the chart, thusly, with the new green marker at (100%,$0):

We now have established the two endpoints of the relationship we seek.

At this point, we need to think a little bit.  Both of our first two scenarios established that the tax revenue generated is zero.  It’s fair to ask, then:  is there any tax rate at which the tax revenue generated by the individual for the taxing entity is NOT zero?  Of course.  If the tax rate is 1%, the individual gets to keep 99% of what she earns and so has an incentive to earn an income; and the taxing entity receives 1% of what she earns.  That revenue figure is small, of course, but the important thing is that the tax revenue is greater than zero.  Likewise with 2%, and so forth.  In fact, we can assume that all of the tax rates between 0% and 100%, not including the endpoints, generate a value for the tax revenue that is greater than zero dollars.

We now have to make a further assumption, to wit, that the series of points we place on the chart represents a continuous and reasonably smooth curve.  That is, the tax revenue generated at a tax rate of X% is not terribly different from the revenue generated at a tax rate of [X+1]% (or, equivalently, [X-1]%).  Neighboring values of the tax revenue are not equal, but they’re not very far apart, either.

Let’s summarize what we know.  The curve that represents the relationship between tax rate (X, or horizontal axis) and the tax revenue generated (Y, or vertical axis) is a SMOOTH, SLOWLY VARYING function Y(X), and its Y values are everywhere >$0 except at the endpoints X=0% and X=100%, at which points Y=$0.

What is the shape of a curve like that, which we denote as Y(X)?

At this point, we need to invoke Sir William of Ockham, who urges us to select the simplest of all plausible answers, to avoid adding unnecessarily to the “complicatedness” of our answers.  That means, in our case, that the general shape of the function Y(X) looks something like this:

Now it may appear that qb has stacked the deck here and has insisted that the maximum revenue occurs at X=50%.  But qb has no idea whether or not that’s true.  We simply haven’t generated enough data points to say for sure.  Still, granting our assumptions, the question concerned the GENERAL shape of the curve, not the PRECISE shape.  And the simplest smooth, slowly varying function Y(X) that passes through the points (0%,$0) and (100%,$0) and elsewhere takes on Y-values greater than zero must have the general shape of that last chart.
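For readers who like to poke at these things numerically, here is a minimal Python sketch of the simplest such curve.  The quadratic form (and hence the peak at exactly 50%) is an illustrative assumption, not a conclusion of the thought experiment:

```python
# A sketch of the simplest smooth curve satisfying our constraints:
# Y(0) = 0, Y(100) = 0, and Y > 0 everywhere in between.  The parabola
# below is illustrative only -- nothing in the argument fixes the peak
# at X = 50%, and the scale constant k is arbitrary.

def revenue(rate, k=1.0):
    """Hypothetical tax revenue at a tax rate given in percent (0-100)."""
    return k * rate * (100.0 - rate)

assert revenue(0) == 0.0       # no tax levied, no revenue
assert revenue(100) == 0.0     # full confiscation, no incentive to earn
assert all(revenue(x) > 0 for x in range(1, 100))   # positive in between
```

Any constant k>0 leaves the two endpoints and the positivity in between intact, which is why the argument concerns only the general shape.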

At this point, we can draw two simple but powerful conclusions, which are unarguably true (again, granting our assumptions)…or, if you prefer, which are strictly required by the intermediate conclusions we’ve reached thus far:

1.  There is at least one value of the tax rate, X=X(loc-max), which generates a local maximum in the amount of revenue generated, Y(loc-max).  By “local maximum” we mean that if we move to the left or to the right along the curve from X(loc-max), the resulting value of Y will be less than Y(loc-max).  Try it out!  In the chart above, go to X=50%, put your finger on the yellow circle that corresponds to it, then move your finger along the curve either to the right or the left of that point, and you will see that the value of Y decreases either way.

In the general case, there may be more than one of these local maxima, but if we stay with Ockham’s Razor (the “principle of parsimony”), there’s only one…which means we can call it the “global maximum.”  So we can treat the chart immediately above as if the curve were a piece of spring steel, and push on it from the right or left to move the X-location of the global maximum, but we’ll still have one maximum, and it will be somewhere between X=0% and X=100% (not including those endpoints, as we said).  Again, we don’t have enough information to say that X(loc-max)=50%; all we know is that there is a value X(loc-max), and it lies somewhere between 0% and 100%.

2.  The more powerful observation, which follows from conclusion #1, is this:  There is at least one non-trivial region of the curve in which lowering the tax rate (i. e., moving to the left along the curve) generates MORE revenue than was generated at the starting point.  The easiest way to see this is to start at the point X=100%, and then move to the left and rejoin the curve.  Whether the value of X(loc-max) is at 50% or 75% or even 90%, we must conclude that the revenue generated at X=99% is greater than the revenue generated at X=100%.
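Conclusion #2 can be checked against a deliberately lopsided curve.  The Python sketch below uses a made-up skewed revenue function (an assumption chosen purely to move the peak off 50%); even so, everywhere to the right of the peak, lowering the rate raises revenue:

```python
# An illustrative skewed curve: Y(X) = X^2 * (100 - X), which still
# satisfies Y(0) = Y(100) = 0 but peaks near X = 67% instead of 50%.
# The functional form is made up for demonstration, not derived.

def skewed_revenue(rate):
    return rate**2 * (100.0 - rate)

peak = max(range(101), key=skewed_revenue)   # lands at 67 on this grid

# Lowering the rate from 100% raises revenue regardless of the skew...
assert skewed_revenue(99) > skewed_revenue(100)
# ...and in fact every step leftward from above the peak raises revenue:
assert all(skewed_revenue(x - 1) > skewed_revenue(x)
           for x in range(peak + 1, 101))
```

Pushing the “spring steel” the other way (a peak below 50%) just enlarges that right-hand region; the conclusion survives either way.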

Lots of our fellow citizens – VOTERS – cannot see this.  They assume that decreasing tax rates ALWAYS reduces tax revenue, which means they think the world works like the red circles in the following chart:

The red chart, however, denies the validity of the assumption we made about the self-interested behavior of the breadwinning mom in our thought experiment.  People who subscribe to the red chart don’t actually behave that way themselves; not one of us, with only the rarest of exceptions, would continue to work productively if all of our earnings were confiscated at the muzzle of the IRS’s gun.

It is true that lowering the tax rate does reduce revenue in SOME regions of the chart.  But not all!  We have just established, by a simple thought experiment, that at some values of X, lowering the tax rate would actually increase revenue.  And that, dear friends, is like presenting liberals with kryptonite.  The only real question for us to answer is this one:  which side of X(loc-max) are we really on?

Fair question.  In a subsequent post, we’ll see how in the case of the Reagan tax cuts of the early ’80s and the Bush tax cuts of the early 2000s, revenue to the Treasury actually increased.  That suggests strongly that we have been overtaxed…and that if BHO and the House Progressive Caucus carry the day, revenue to the Treasury will decrease, not increase, and our deficit will spiral upward still more.

By the way, what we have just done with our thought experiment has a name:  the “Laffer Curve,” named after the famous economist Art Laffer.  And you thought economics was an impenetrable fog!


Relieving PreacherMike of a Hosting Burden

29 March 2011

I’ve transposed a discussion from Mike Cope’s blog to here as a courtesy to Mike, who has been very patient with us.  If you want to read the background for this post, click here.  What follows is a reply to Q and JTB.


Q, I’ll try to unpack a little more.  But I want to stay in the thought-experiment or conceptual domain for a little while longer to get some things nailed down before proceeding, in the interest of avoiding a barren argument.

I should say right away that this tete-a-tete has already helped me clarify my own thinking, and I see at least a few of the exposed vitals:  things that I supposed were self-evident are in question.  So that’s a salutary effect already.


Some thoughts, off the cuff:

Biologists often enjoy the luxury of dealing with discrete variables, variables that have either THIS value or THAT value (or perhaps THAT value over there).  In the case of what I am calling material sexuality, the options tend to be either XX or XY, with the occasional excursions to the aneuploid permutations of X and Y (XXY, XYY, and the like); maybe there are other possibilities of which qb is not aware.  In any case, the possibilities are constrained by discrete mathematics, and the difference between rejecting and not rejecting the null hypothesis in the discrete domain can often be boiled down to a statement like, “either THIS or THAT, but not somewhere in between.”  That is a very convenient domain in which to define essences; we can call an attribute essential if there is precisely zero chance of finding its alternative in a population of beings, and any attribute whose probability of occurring is nonzero is therefore deemed non-essential.  (I am using “essential” in a pretty strict sense, I think, that coincides with the way JTB has been using it: not the moral sense of “must have it,” but in the more neutral sense of “doesn’t make sense without it.”)

If we focus our attention, now, on the question of certain organs, which has been a rhetorical tactic here (and an effective one, at that), pretty clearly we can establish that having an [organ] is essential to being a [sex], and if that [organ] is replaced by its alternative, the value of [sex] toggles accordingly.  It is in the nature of some biological variables to yield to this analytical framework.  And any attribute variable [X] that, when toggled, does not toggle the classification variable [a] is deemed a non-essential attribute to class [a].  The underlying science may be unbelievably difficult, but the framework for defining essentiality is pretty simple; either it is, or it is not, and one occurrence of “not” is enough to force us to abandon essentiality.
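That strict, discrete test of essentiality can be sketched in a few lines of Python.  The attribute names and the population records below are hypothetical, chosen only to show the “one ‘not’ is enough” rule:

```python
# A sketch of the strict discrete test described above: an attribute is
# essential to a class iff every member of the class possesses it; a
# single counterexample forces us to abandon essentiality.  The records
# here are invented purely for illustration.

def is_essential(attribute, population):
    """True iff no member of the population lacks the attribute."""
    return all(member.get(attribute, False) for member in population)

population = [
    {"attr_a": True, "attr_b": True},
    {"attr_a": True, "attr_b": False},   # one "not" is enough
    {"attr_a": True, "attr_b": True},
]

assert is_essential("attr_a", population)        # zero counterexamples
assert not is_essential("attr_b", population)    # one counterexample kills it
```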

In my work, I seldom if ever have the luxury of working with discrete variables like that.  More often, I am working with continua, or at least with discrete variables that have so many possible values they might as well be continua.  (Anywhere there’s an analog-to-digital converter, the latter is the case; with a dial thermometer, the former is the case.)  I take it that what Q and JTB have labeled “trends” are the statistically significant differences between central tendencies of these continua or quasi-continua…but with the caveat that no such variable can ever be thought of as essential, precisely because it involves a variance of some kind.  That is the idealized domain that I simply cannot grasp, and of course it means that, strictly speaking, wherever an attribute variable lies on a [quasi-]continuum, we cannot by definition speak of essentiality.

In the spiritual domain, which you will recall I have presupposed as a domain that (properly defined) is orthogonal to the material domain, our attribute variables are almost NEVER discrete.  That means, of course, that we have defined away any practical use of the concept of essentiality; we showed that in the previous paragraph.  I take that to be the conceptual unpacking of JTB’s critique of trans-anatomical “essence.”


“Spiritual” attributes:  what are they?  It is here that qb begins to appeal to scripture for help.  Given the rather privileged position I have assigned (!) to the spiritual domain – it is on hierarchical par with the material and interacts with it, but cannot be reduced to it – I look to ethical codes for some guidance, and further, I permit considerations of “holy” spirit to enter into the picture.  The language I find has to do with matters of character:  patience, loyalty, self-control, and the like.  And having adopted the further presupposition that we are created beings in some meaningful sense (that is, whether we emerged from the primordial soup over epochs upon epochs of evolutionary design or were cobbled together in a nanosecond from celestial dust, we are the purposeful project of a creator God), I naturally suppose that God’s purposes are served by granting us attributes, even an array of attributes, that are in the service of those purposes.  Thus, testosterone and progesterone in the material domain, and…well, what?  Character-like attributes in the spiritual?

But these spiritual attributes are, as we saw, variables that cannot be well quantified in discrete terms.  The whole notion of spiritual growth, as in Peter’s epistles, presupposes that the variables lie on a [quasi-]continuum.  So if we are forced to adopt a strictly discrete standard for essentiality, it clearly follows that none of these attributes can be deemed “essential.”

I also think it’s possible that God pursues his purposes by granting an arrangement of material attributes that roughly coincide with an arrangement of spiritual attributes such that they reinforce one another in matters that he deems critical, primarily those matters that contribute to what Dallas Willard and Richard Foster have framed as our irreducibly aesthetic _telos_: “an all-inclusive community of loving persons, with God at the heart of this community as its prime sustainer and most glorious inhabitant.”  We were created for good, and his purpose is to refashion us into the good.  In creating us, then, I suppose that God has stacked the odds in his own favor.  (No time or inclination to chase the rabbit into the brambles of theodicy.)

I do not know what to make of individuality in this regard except to note that, contra Richard Beck, an individual is at least in part definable in terms of his or her capacity to exert an independent will, no less than having possession of an independent body.  I take “will” to reside primarily in the spiritual domain, interacting with the material and capable of succumbing to it, but not reducible to it.  Perhaps it’s fairer to say that Dr. Beck, as a self-proclaimed “weak volitionist,” at least permits this much latitude into freedom, and I’m happy to work with whatever crumbs of moral liberty he allows to float off the table.

Returning now to the [quasi-]continua of the spiritual domain…it seems to me that if we recognize that not all variables of interest are discrete like those biological ones, we have to speak of essentiality in terms of approximations and thresholds.  Obviously, we are going to be arbitrary to some degree.  But it seems that the kinds of variables we’re dealing with simply defy any discrete tests that could be applied to, say, the presence or absence of an [organ].

By now, it should be clear that we are doomed either to (a) a semantic argument about what constitutes essentiality or (b) a trivial argument about presuppositions that we don’t share.  Or maybe not.  I certainly don’t say that in a desire to cut things off; but, like JTB and Q, it’s probably better not to waste time on pursuits that are highly likely to be barren.



Golf Update (for golf geeks only…seriously.)

28 March 2011

Back in November, Jenn bought me a fabulous set of classic, forged irons, custom fit for length, lie, grip size, and shaft flex.  They are Titleist MB 710 irons (3-9) with X100 Dynamic Gold steel shafts (stiff), and they replace a massive set of 20-yr-old Tommy Armour 845 Silver Scot cast irons (1-PW) with stiff graphite shafts.  I hit a very high ball with the 845s, and although I had never golfed enough to play very well, I had learned to shape the ball OK.  Still, my scores were almost always in the 90s, and I was nearly always good for at least two snowmen (or worse!) per 18 holes.

In July of 2010, however, we decided that, with the consolidation of Tascosa Country Club and La Paloma Golf Club, together with a master plan for a really family-oriented renovation at Tascosa and reasonable dues, we’d take the plunge.  Summertime sunshine gives us time for a full 18 after work if we want, and now we have 36 holes available to us.  So I’ve hit a lot of balls, both on the range and on the La Paloma track, and it has made a world of difference in my game.  So when November rolled around, it seemed reasonable to go with a custom-fitted set of irons.

(In case you haven’t heard, the transition from cast/cavity-back irons to forged/muscle-back irons is a circus, a comic tragedy, a devastatingly difficult transition.  What “they” say is true:  the forged iron is unforgiving and brutal on off-center hits.  And nearly everything is off center.)

It also made sense to use some of that Christmas money for the first golf lesson.  I figured, as long as the new irons are totally destroying my game, I might as well let someone who knows what he’s talking about deconstruct everything I’m doing and remake the swing from the ground up.  So I girded my loins for the humiliation and scheduled the session.

I should also point out that along with the MB 710s, if I was going to go top-of-the-line I might as well go all the way and get an array of Vokey Spin Milled wedges in those gorgeous finishes.  I got a 48 degree PW in standard silver, a 52 degree in Oil Can, and a 56 degree in Black Chrome.  All of these clubs are 2010 models, so they feature the latest PGA-“legal” V-grooves with rounded groove edges, designed to reduce the amount of spin the Tour players are able to generate.  After about two months of noodling around with those three wedges, it is clear that the PGA has achieved its goal, at least with qb.  I love playing Balata-covered balls, and I had learned with my 845s and my two 855 wedges (SW and LW) how to check the ball up with just about everything shorter than a 6-iron, as long as I was playing Balata.  The harder urethane covers…not so much.  And so much the worse with the 2010 Vokey wedges, at least in the early going; everything around the green was chip-and-run.

You remember the prophecy of Joel?  Everything that the grasshopper left, the locust would eat, and so forth?  That’s the way it felt as Perry completely broke my swing apart with these dastardly but tantalizing blades.  As if the forged irons hadn’t humiliated me enough!  Perry turned my left hand clockwise to “strengthen” it, which reduced my range of motion; he moved my hands back from their earlier position well ahead of the ball; we closed the face of my club at address; he dropped my hands several inches so that the butt of the shaft was now pointing at my belt buckle; he reduced the length of my backswing; and he refused to let me take the club back inside, a move I had devised to ensure I could draw the ball (R-L) at any time.  And my beloved Taylor Made R9 Burner driver?  A total wreck.  I was miserable, hitting driver about 200 or less with an utterly unpredictable shape.

But I stayed with it, and I hit hundreds and hundreds of range balls from January to March this year, not spending a lot of time over each ball but just trying to hit as many as possible while obeying Perry’s instructions and trying to figure it all out.


We now have breakthrough.  Three straight days on the practice range in late March, plus a nine-hole excursion with Silas yesterday, and the light is going on big-time.  The swing is getting grooved.  The old distance has returned; it’s not unusual to hit 285 off the tee, and my irons are 3 (205-215), 4 (195-205), 5 (185-195), 6 (170-185), 7 (160-170), 8 (150-160), and 9 (135-145).  Tempo is slower.  Trajectories are starting to repeat themselves 3, 4, 5, even 6 times in a row on the range.  Sure, I have the odd series of fat swings that blast divot pieces into my neighbors’ faces, and I have the occasional thin, worm-burning fade/slice when I chicken out and don’t commit to the swing.  But the Eureka moment has arrived.  And sweet-spot contact is the rule now, not the exception.  Oh, how sweetly those blades sing when we make contact!  The ball just seems to rocket off the clubface.

What made the difference?  Well, no doubt Perry knew what he was talking about.  What he was doing was making my swing more mechanical, and therefore (as the logic goes) more repeatable.  But I also learned something about those damnable blades:  you hafta hit the ball with a descending blow and take about a 6-8″ long divot from in FRONT of the ball.  With those cast irons, I could shave grass and sweep the ball off the turf, and still get a solid strike on the ball.  Not with the muscle-backs!  So in order to achieve the descending blow, I had to move the ball back in my stance, toward my right foot…for every single iron.  The 3-iron placement is at the midpoint, and every subsequent iron is a tiny bit further back than the last one.  The left arm is rigid with a tight grip; the backswing of my hands is much shorter (and slower); and I now have to cock my wrists at the top of the backswing to get the shaft parallel to the ground and generate enough power.  I’m anxious to get another video lesson to see how things have changed since my first one.  How embarrassing!  I looked like Craig Stadler over the ball, a two-bit amateur on the takeaway, and a grossly distorted Jim Furyk pretzelmania to get the club back in the hitting zone.  It was UGLY.  It might still be ugly.  But the hard work (!) is paying off.


One last thing.  La Paloma’s golf shop had some clearance items this winter, including a bunch of brand-new, pre-2010 Vokey wedges with the older, square grooves.  I figured I’d buy one and see if I could get it to wrap those grooves around a ball and check it up.  So I bought a 60 degree, did some research on ball technology, and went to the practice green.  Aaaaaaahhhhhhhhhh….yessssssss!

So here’s the secret.  Do what it takes to learn to hit your irons with that steeper, descending attack angle, whether you use cast or forged irons.  If you have a mixture of Pro V1 and Pro V1x balls, get rid of the V1x balls and replace them with V1s – they’re softer, and they spin a LOT more.  I can finally let go of my Balata addiction, which was getting harder to fill and more expensive to underwrite; they’re not even manufactured any more.  And see if you can find one of those pre-2010 Vokey Spin Milled wedges with the square grooves.  At least one.  It WORKS!


Why We Oughta Be a Bit More Humble About What We “Know”

26 March 2011

Occasionally I go to a feedyard and measure dust concentrations, wind speed/direction, solar radiation, humidity, and a couple other things, all of which gives me a chance to estimate the rate at which the feedyard is emitting dust. I bring all those data back to the office, load ‘em into my computer, and run them through a model that relates what I measured – the dust concentrations and weather data – to the feedyard’s emission rate. Let’s say, for example, that on one particular day, this whole enterprise yields an emission rate of 100.

Is 100 “true?”

Well, probably not, not in any strict sense, anyway. Every single quantity I measured, in fact, was only approximately accurate, and even then, much of what I measured is based on some arbitrary standard that someone set somewhere. (For example, just this week we learned that the little cylinder of metal alloy that the International Bureau of Weights and Measures keeps in an air-tight vessel in a vault near Paris as the standard kilogram is actually losing mass over time.) My thermometer, my wind vane, my anemometer, and my dust monitors all measure those quantities in an approximate sense. Every temperature I measure is an estimate of the “actual” temperature, whatever that means.

These measurements are analogous to our biblical text. We “know” what “Paul” “said,” but we are less certain what “Paul” “meant.” Some of what is attributed to Paul appears to have been inserted by a later editor; but even if we’re not prepared to concede that point, we can say with some assurance that we do not have access to what “Paul” actually wrote…after all, most of our raw material is dated (by scientific procedures whose methods are also subject to error) no earlier than the second or third century CE, a couple hundred years after “Paul” lived, according to “Luke.” The same is true of all of the biblical authors, to a lesser or greater degree. That does not mean, of course, that we cannot know anything; it just means that if we are wise, we will admit that our data are only estimates of the real quantities we’re interested in. Our certainty is not 100% in any case.

Back to feedyard dust. I now take my data and run them through my model, a mathematical model I’ve built to predict how a given emission rate at point A translates to a dust concentration at point B somewhere downwind. That model, itself, requires a great deal of sophisticated approximation at many levels, and it involves all of the quantities I “measured” in the first step. All those measurements were estimates, as we saw, which means they’ve all got some errors associated with them. But then, so does my model; the math I use to translate emission rate to downwind concentration is an approximation of the underlying reality, a reality so complex that I’ve had to make a series of enormously consequential assumptions just to come up with an equation simple enough that I can actually solve it. So I’m running a bunch of error-prone estimates through an error-prone model to generate the estimate I’m really interested in. Result: after I compute how all those many uncertainties add up and make their ways through my model, my “100” is actually “100 plus or minus 84.”
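For readers who want the arithmetic: independent relative errors in a multiplicative chain combine roughly in quadrature. A Python sketch follows; the three component uncertainties are hypothetical stand-ins for illustration, not the actual figures behind the “100 plus or minus 84”:

```python
import math

# First-order (quadrature) propagation of independent relative errors
# through a multiplicative model.  The component values below -- for the
# concentration measurement, the weather data, and the model itself --
# are invented for illustration only.

def combined_relative_uncertainty(rel_errors):
    """Combine independent relative errors in quadrature."""
    return math.sqrt(sum(e**2 for e in rel_errors))

rel = combined_relative_uncertainty([0.30, 0.25, 0.70])   # about 0.80
emission = 100.0
print(f"{emission:.0f} +/- {emission * rel:.0f}")         # prints "100 +/- 80"
```

Notice how the single largest uncertainty dominates the combined figure; shaving a few percent off the small errors barely moves the result.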

That model I’m talking about is analogous to our individual world views, which we build from our perceptions of reality, refine with our capacity to reason, and use with our somewhat stunted ability to interpret what our model spits out. I, for example, do not have the range of experience that an individual woman has to process and understand events that (say) brought her female-ness to the surface as a primary piece of data in some local context. So my model for understanding a particular event at her church is inevitably going to yield a different conclusion than HER model would yield under otherwise similar conditions.

All of this, again, is not to say we cannot know anything. It just means we need to be attentive always to the uncertainties that lurk in every move we make. qb will always balance his checkbook on the basis that 2+2=4 and 4-2=2; the uncertainties there are almost imperceptible. On the other hand, qb will tread lightly on the insistence that women should cover their heads whenever they enter the “church building,” because there are a lot of moving parts in the machinery that yields such a doctrine, many of which are subject to individual uncertainties, and all of which work together in a model structure that itself is fraught with uncertainty, bias, all kinds of error.

Look, maybe we can boil it down to this: the more uncertainty there is in each of the data points we use, and the more moving parts there are in an argument that links those data points together in a narrative model, the humbler we ought to be about the conclusions we reach through that narrative. So when some so-and-so spouts off about how self-evident it is that “genders” have self-evident “roles” within the “New Testament church,” I…chuckle a bit.

Yes, I know about Genesis 1:26; and yes, I know about Paul’s epistles. I’ve studied and pondered all of it. It’s just that a lot of uncertainty lies behind each of those data points, whether we acknowledge it publicly or not. I’ve also backed up from the Scriptures and tried to consider what sort of model is implied by the macro-scale trajectories that we observe in the canon as a whole, and I can’t ignore those, either, as trends that might need to be factored into my model for understanding how Genesis 1:26 and Paul’s epistles fit together.

Now: Fire away.


Sunday Lunch

24 January 2011

This, friends, is LIFE.

The Sandwich:  Six ounces of 85% lean ground chuck, grilled medium, rested 5 minutes, with roasted green chile, fresh avocado, red onion, Grey Poupon, and red leaf lettuce.

The Beverage:  Widmer Brothers “Broken Halo” IPA.

The Result:  Ecstasy.


Rethinking “Inspiration” – 05; Looking to Ourselves

31 December 2010

We began the series trying to establish some benchmarks, and it seems appropriate to revisit them here from another angle before pressing too far ahead.  Far more important than any other benchmark is to establish precisely what it is that we are asking.

In that regard, then, qb affirms that our canonical scriptures are divinely inspired and that, with appropriate attention to the limitations in scope that Kevin Davis has spelled out thus far in his responses, the scriptures are indeed, as the writer of II Timothy 3:16 – whom I am delighted to call “Paul” – puts it, profitable “for teaching, rebuking, correcting, and training in righteousness, so that the servant of God may be thoroughly equipped for every good work” (NIV).

Our question is not so much whether or not the scriptures are divinely inspired, but rather:  what coherent meaning can we reliably attach to the word “inspired” vis-a-vis some agreed extent of the canon to which we take the word to apply? Or, to put it another way:  What narrative of “inspiration” best fits the data we have before us concerning the degree of our canon’s internal cohesion, its multiplex literary nature, and its historical-critical liabilities?  Are the findings of modern, Enlightenment-driven criticism in its various forms fatal to the orthodox notion of “inspiration,” or is there a credible, substantive understanding of “inspiration” that accounts for those liabilities and yet retains enough strength to undergird the canon’s practical utility as a potent vessel of the divine will?

To repeat, then:  we are not necessarily rehearsing the arguments over the extent of the canon per se.  qb has little interest (here, anyway) in exploring the question of which scriptures belong in the canon and which do not.  This is another way of stipulating, to be pithy, that we can identify a canonical ensemble of works on which both of my readers can agree, and start there.

We then outlined a couple of possibilities by looking at Caravaggio’s two versions of “St. Matthew and the Angel” and one from Rembrandt.  The most popular evangelical view appears to be some variant or mixture of #1 and #2, with YHWH dictating some of it – or even most of it – but giving rein to at least some of the authors such that any resulting errors or contradictions are the fruit of human fallenness but do not impinge upon the overall reliability of the canon as an expression of the will of YHWH.  This understanding of “inspiration” appears to qb to be little more sophisticated than fitting a spline to a series of ordered pairs and coming up with R^2=1.000; of course we can come up with an explanation for apparent excursions by adding more parameters or discretizing our domain into more intervals (e. g. going from chapter-by-chapter theories of inspiration to verse-by-verse theories, and so forth).  But in doing so what we gain in apparent accuracy we lose in actual explanatory power.  That is, we eventually interpolate perfectly but we lose any credible ability to extrapolate.  And given our apparent need to plumb scripture to address a breathtaking array of modern and postmodern quandaries, ethical conundra, and extra-canonical discoveries, what we really need is a basis for extrapolating scripture with a sense of narrative coherence.
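The spline analogy can be made concrete in a few lines of Python. The data below are made up purely for illustration: the unique polynomial through n points “fits” them perfectly (R^2=1.000), yet a short step outside the data it swings wildly, which is exactly the trade of apparent accuracy for extrapolative power described above:

```python
# Perfect interpolation vs. hopeless extrapolation, in miniature.
# A degree-(n-1) polynomial through n points reproduces each one exactly;
# the data here are invented, roughly flat with a little noise.

def lagrange(xs, ys, x):
    """Evaluate the unique polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [0, 1, 2, 3, 4, 5]
ys = [0.0, 0.9, 1.1, 0.8, 1.2, 1.0]

# Perfect fit at every observed point (the spurious R^2 = 1.000)...
assert all(abs(lagrange(xs, ys, x) - y) < 1e-9 for x, y in zip(xs, ys))
# ...but two steps beyond the data, the curve has left the building:
assert abs(lagrange(xs, ys, 7)) > 10   # far outside the observed 0-1.2 range
```

Adding parameters until every excursion is “explained” buys exactly this kind of fit: flawless inside the data, useless beyond it.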

(Yes:  qb presupposes narrative coherence as a fundamental property of the word of God.  Those of you who wish to hoist qb on the petard of logical circularity have every right to do so, but only after you tell us whether or not you, too, presuppose a coherence to scripture, narrative or otherwise.)
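(For readers who enjoy the numerical side of the analogy, here is a minimal sketch, with invented data points that mean nothing in themselves:  give a model enough parameters and it will interpolate any data set perfectly – R^2 = 1.000 – while losing any credible ability to extrapolate.)

```python
import numpy as np

# Five invented "data points" scattered loosely around the simple trend y = x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.2, 2.8, 4.1])

# A degree-4 polynomial through five points interpolates them exactly.
coeffs = np.polyfit(x, y, deg=len(x) - 1)
fitted = np.polyval(coeffs, x)
ss_res = np.sum((y - fitted) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 on the data we fit: {r_squared:.4f}")  # 1.0000

# The parsimonious two-parameter alternative: a straight line.
trend = np.polyfit(x, y, deg=1)

# Step just beyond the data, and the "perfect" model diverges wildly
# from the gentle trend it was supposed to capture.
for x_new in (5.0, 6.0):
    print(f"x={x_new}: exact-fit model -> {np.polyval(coeffs, x_new):.2f}, "
          f"straight line -> {np.polyval(trend, x_new):.2f}")
```

The exact-fit polynomial earns its perfect score on the data in hand, then lurches far away from the trend line one step outside it – apparent accuracy purchased at the price of explanatory power.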

Evangelical Protestants (speaking of the popular masses, here) seem to have adopted a weighted average of qb’s #1 and #2, with the weighting factor being different for at least each pericope, if not for each verse or phrase.  And qb’s gut feeling is that we organize ourselves into groups – congregations – at least in part on the basis of a common weighting scheme, ultimately represented by the scheme adopted by the dude in the pulpit each Sunday.


Sir William of Ockham was surely no great fan of splines, and he would not look kindly upon an arbitrarily fine-grained theory of inspiration, either.  No, there must be a way of understanding “inspiration” that is more parsimonious in its parameters, more broadly applicable, more potently predictive, and truer to the nature of the canon and the story that it tells.  Recall:  we have stipulated that our consensus canon is God’s primary, or at least most objectively and transparently available, means of communicating his will to his creatures.  So it probably follows that how God reveals himself to us is consistent with his character and his intentions for us.


To make much headway, though, it occurs to qb that several other gremlins lurk in the shadows and brambles.  The first one is likewise analogous to splining.  Just as a mathematician must specify a priori one or more of the derivatives of the first and last functions at the two extrema, it seems probable that we devise our pet theories of inspiration, at least tacitly, in the service of some practical or ideological agenda.  That is, we have a data set that is more or less fixed and independent (this would consist of the canonical data plus whatever critical and extra-canonical data we might agree to bring to the table) and publicly in view; and we have certain other data points or logical trajectories that are not publicly known and not canonical in themselves but through which we feel constrained to send our spline in order to convince ourselves that our model really *does* have explanatory, extrapolative power.  This, friends, is known as self-delusion, and we must be ruthlessly transparent with ourselves and with one another about where these dangers lie in our own cases.  For example:  if we have decided a priori that “inspiration” must be interchangeable with “inerrant” even though “inerrancy” is a modernist category rather foreign to the cultures from which the canon sprang, we need to ‘fess up.  Now.  Or if our agenda includes an a priori commitment to the persistence of a certain denominational order or institutional hierarchy, we need to be aware of that as well (“Yikes, qb, if we go *there*, what do we do with the whole Campbellite tradition?”).


I don’t mean to muddy the waters too much, although by now you may think I’m a hopeless case.  But we noted that we all have a pretty sizable stake in this discussion, whether we are aware of it or not, and we mustn’t be too careless in preparing the field of battle.  So:  I’ll show you mine if you’ll show me yours.


Rethinking “Inspiration” – 04 (abstract)

31 December 2010

Kevin Davis’ thoughtful and generous reply to a previous post on this topic demands a thoughtful answer, which will come in due course.  Now that the holidays are coming to a close, I expect to return to the thread in a couple of weeks after I re-engage with some of the scholarly literature on the subject of biblical inspiration.  In particular, I intend to spend some quality time with the late Catholic theologian Raymond E. Brown, “‘And the Lord said?’  Biblical reflections on scripture as the word of God,” Theological Studies 42(1):3-19, 1981.  It seems to me that Father Brown presents a serious-minded, incarnational view of what we mean, or ought to mean, when we affirm (as we do) that our canonical scriptures are “the inspired word of God.”

Anyone with a desire to read Brown’s article in a manner that complies with the “fair use” provision of copyright law should feel free to contact me for a PDF.