Pur Autre Vie

I'm not wrong, I'm just an asshole

Monday, January 26, 2015

No True Technocrat

Today Paul Krugman wrote this (in the course of criticizing poor policy-making by EU officials):

This is one reason it irks me when the people who have been running Greece, or those in Brussels, are described as “technocrats.” Crat me no techno — real technocrats would (and did) warn about the downside of austerity, not seize eagerly on faddish research purporting to make a case for policies they probably wanted for other reasons.

Now, I recognize that Krugman is fighting about macroeconomics, not about the role of technocracy in society.  If he were actually debating the merits of technocracy vs. democracy or whatever, he would presumably express himself differently.

But - but! - I think this is a pretty good illustration of the way people often think about technocracy.  Technocrats are not just empowered bureaucrats, they are the good guys, the smart guys, the ones with white lab coats.  They are people like us.  So you see, there's nothing dangerous about technocracy at all.  When technocrats engineer a multi-year, staggeringly painful recession, then obviously they were never technocrats to begin with.  Because no true technocrat could do such a thing.

Wednesday, January 14, 2015

Taxes and Federalism

I'm not going to write a long post right now, but I just want to put down a marker that this article in the New York Times is exactly what you would expect:

States and localities have regressive systems because they tend to rely more on sales and excise taxes (fees tacked onto items like gas, liquor and cigarettes), which are the same rate for rich and poor alike. Even property taxes, which account for much of local tax revenue, hit working- and middle-class families harder than the wealthy because their homes often represent their largest asset.
The federal income tax system, by contrast, primarily taxes individuals at a graduated rate, and those who earn more pay a larger share. (The federal system also uses payroll taxes to raise large sums for Social Security and Medicare, dipping into the pockets of many low- and moderate-income Americans who pay little, if any, income tax.)

At the local level, if you tax rich people, you will drive them away, because you are in vicious competition with nearby (and not-so-nearby) jurisdictions.  The federal government, on the other hand, has much more leeway to impose progressive taxation.

One potential solution is to impose higher taxes at the federal level and then distribute the revenue to local governments on a per capita basis.  This has the potential to counteract the race-to-the-bottom dynamic and allow local governments to do their jobs without imposing high taxes on the poor.

Interestingly, there is a good argument that it's important to lower marginal tax rates at the low end of the income distribution, a result that conservatives could, in theory, get behind.  (In other words, in terms of incentive effects, taxes are more distortionary at the low end of the income spectrum, and so it is beneficial to accept higher marginal tax rates at the high end in exchange for lower marginal tax rates at the low end.)  But because the conservative movement is so bound up with its rich constituency, I don't think you'll hear a lot of conservatives arguing for a systematically more progressive tax code.

Sunday, January 04, 2015

Beer and Public Policy

One phenomenon that is quite interesting to me is the huge increase in the number of breweries in the United States in the last few decades, which has occurred against a backdrop of declining beer consumption.  (I am too lazy to look up the data, but the increase in the number of breweries is beyond question.  I am not quite as confident about declining beer consumption, though that has definitely been reported in recent years.)

One way to think about it is this.  Imagine that each beer drinker has preferences that can be mapped in n-dimensional space.  For simplicity, let's pretend there are just 2 dimensions in beer preferences, since I'm pretty sure the results will generalize from there.  Just for concreteness, let's say the 2 dimensions are concentration of alcohol (ABV) and hoppiness (measured in international bitterness units, or IBUs).  And let's say that people's enjoyment of the beer diminishes as a function of the distance from their ideal point on the grid.  (So in other words, an individual's preferences are single-peaked.  I'm not sure this matters, but anyway I don't think the assumption does any harm.)

So you could generate a graph of people's preferences, with aggregate satisfaction on the z axis and ABV and IBUs on the x and y axes.  The result would be some kind of surface vaguely resembling a hill or mountain range.  There's no way to guess what this shape might be a priori (and remember, it's a highly simplified example to begin with), but if tastes are fairly stable, then there should be a peak on the graph where you can maximize the aggregate enjoyment of a brand of beer.  Of course there are other considerations - maybe hops are really expensive, maybe ABV is taxed or regulated in such a way that the brewers don't optimize that dimension purely in terms of consumer tastes.  And on top of that, it may be more profitable to brew an inoffensive beer (one that is acceptable to the highest percentage of drinkers) rather than a satisfaction-maximizing one.
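The peak-finding idea can be made concrete with a toy computation. This is a sketch under made-up assumptions: hypothetical drinker ideal points drawn from arbitrary distributions, a simple distance-based (single-peaked) enjoyment function, and IBUs rescaled so the two dimensions are roughly comparable. None of the numbers are calibrated to anything real.

```python
import math
import random

random.seed(0)

# Hypothetical drinkers: ideal points in (ABV %, IBU) space.
drinkers = [(random.gauss(5.0, 1.5), random.gauss(30, 15)) for _ in range(1000)]

def aggregate_satisfaction(abv, ibu):
    """Sum of enjoyment across drinkers; enjoyment falls off with
    distance from each drinker's ideal point (single-peaked preferences).
    IBUs are divided by 10 so the two axes are on comparable scales."""
    total = 0.0
    for a, i in drinkers:
        dist = math.hypot(abv - a, (ibu - i) / 10.0)
        total += 1.0 / (1.0 + dist)
    return total

# Grid search for the peak of the satisfaction surface.
best = max(
    ((abv / 10.0, ibu) for abv in range(20, 121) for ibu in range(5, 101, 5)),
    key=lambda p: aggregate_satisfaction(*p),
)
print(best)  # the single brand that maximizes aggregate enjoyment
```

With these particular made-up tastes, the peak lands near the center of the drinker population, but the point of the exercise is just that a single "satisfaction-maximizing" brand exists once you fix the preference surface.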

But whatever considerations end up mattering, if there is only one brand of beer, it is going to be pretty far away from a lot of people's preferences (assuming there is much diversity of tastes).  And what happens as you add more brands of beer?  Well, this is a very complicated question, but one possibility is that the brands will cluster together for Hotelling's Law reasons.  So we might observe Budweiser, Miller, and Coors all brewing beers with similar characteristics, even though those beers are redundant in terms of preference-satisfaction.  And in fact, that's what we observed for years.  (This may be a good result for some beer drinkers, by the way.  If your tastes are well-served by weak lager with minimal hop character, then the intense competition around that point on the graph will keep prices low while giving you exactly the beer you want.  Other beer drinkers, though, are likely to regret that all of the major brands are focusing on a part of the graph far from their ideal style.)
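The clustering story can be sketched in one dimension, which is the classic Hotelling setup. In this purely illustrative simulation (all names and numbers are mine, not drawn from any real data), two brands each choose a hoppiness level for uniformly spread drinkers, each drinker buys the nearest brand, and the brands repeatedly best-respond to each other; both end up at the median drinker's taste, exactly the redundancy described above.

```python
# Hotelling's Law in one dimension: two brands choosing a hoppiness
# level (IBU) for uniformly distributed drinkers, each of whom buys
# the nearest brand. Toy illustration, not calibrated to any market.

def share(own, rival, drinkers):
    """Fraction of drinkers closer to `own` than to `rival` (ties split)."""
    wins = sum(1 for d in drinkers if abs(d - own) < abs(d - rival))
    ties = sum(1 for d in drinkers if abs(d - own) == abs(d - rival))
    return (wins + ties / 2) / len(drinkers)

drinkers = list(range(0, 101))   # ideal IBUs spread uniformly over 0..100
positions = [10.0, 90.0]         # the two brands start far apart

for _ in range(50):              # alternating best responses
    for b in (0, 1):
        rival = positions[1 - b]
        # try every integer position, keep the share-maximizing one
        positions[b] = max(range(0, 101),
                           key=lambda p: share(p, rival, drinkers))

print(positions)  # both brands converge to the median drinker (IBU 50)
```

Each brand's best move is always to undercut its rival on the crowded side, so the two leapfrog inward until they sit together at the median, leaving drinkers at the extremes poorly served.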

Now, it's somewhat mysterious to me why that equilibrium persisted for as long as it did.  I'm no beer historian, but as far as I can tell, from the end of Prohibition (in 1933) to the advent of "microbreweries" in the early 1980s, there were very, very few brands of beer available in the United States that skewed very far from the light lager template.  Even imported beers were typically light lagers (think Heineken or Corona), with the notable exception of Guinness stout.

But then everything changed.  Brewers entered the market with beers that were very, very far from the existing cluster of mega-brands (a region of the graph in which no small brewer could hope to compete, I would think).  They brewed hoppy beers.  They brewed high-alcohol beers.  They brewed hoppy, high-alcohol beers.  And they experimented on dozens of other dimensions.  And in general they found a great deal of success, at least in the aggregate.  Individual breweries have failed, but "craft" breweries (which generally refers to small breweries making flavorful products) are commanding more and more of the beer market.  Certainly by the mid-90s the trend was noticeable, and in the last decade it has become utterly obvious.

And so if you think about our map of beer preferences, a much larger area of the map is being served by the U.S. brewing industry.  There is no plausible case that Hotelling's Law still applies on a large scale (though there may still be clustering within some regions of the graph).  There has been a profusion of styles readily available to U.S. beer drinkers.

It's fun to think about why things played out this way, and specifically why it took so long to shake off the old equilibrium.  One possibility is that beer drinkers had to be "educated" to enjoy certain styles of beer, which would explain the long gestation period and then (relatively) sudden explosion of interest in craft beer.  Another possibility is that Americans didn't have enough disposable income to pay the high prices commanded by craft beer.  (This is a little implausible as a full explanation, because plenty of European countries enjoyed a range of flavorful beer styles despite having less per-capita income than the U.S.  But it could be a partial explanation of the sticking power of the light lager equilibrium.)  Yet another possibility is that American brewers lacked the know-how to brew more interesting styles.  (In this account, Jimmy Carter's legalization of homebrewing in 1978 was the major turning point, because homebrewers learned to brew all kinds of styles as a hobby and then inevitably some of them went into business as microbrewers.  Certainly that fits the timeline, and I believe it's true that many of the early microbreweries were founded by homebrewers.)

Probably all of these things were factors.  I've kind of run out of steam, though, without building to any kind of coherent point.  I think what I was originally going to say was, whereas the marketplace can (in the right circumstances) allow for a kind of pluralism, in which a wide diversity of preferences are satisfied, the same is not true of public policy, where quite frequently you have to do the equivalent of brewing a single brand of beer for everyone.  And this means that you don't have the luxury of dismissing anyone's concerns, however idiosyncratic they might be.  So for instance, if someone complains to me that he only likes hoppy beers, then I will have no sympathy - his needs are abundantly satisfied by the market.  But if someone complains that he can't function in a society where gambling is legal, then I can't just tell him that he should stay out of casinos.  He knows that, but he can't control himself.  There may be a public policy that addresses his concerns while allowing other people to gamble (for instance, "pre-commitment," in which a gambler voluntarily puts his name on a list of people who aren't permitted to enter the casino).  But the point is that if you have to pick a single point on a graph, the choice becomes much more fraught and tragic than if you can simply allow a thousand flowers to bloom.

Wednesday, December 10, 2014

The Insider/Outsider Approach to Ideas

This is really just a pedestrian observation, one that ties together a few things I've written before.

The basic idea is this.  Facing a scarcity of resources, we have to make compromises.  One of those compromises is that intellectual approaches are more rigid and more "top-down" than they would ideally be, which in turn creates an "insider/outsider" dynamic reminiscent of Straussian "esoteric" writing.

I'll give an example that will hopefully illustrate my point.  Sometime just before the middle of the 20th century, economists developed the concept of national income and started to measure it rigorously.  I don't want to imply that people like Simon Kuznets, who played a central role, did a bad job.  Quite the contrary:  they did such a good job that I believe their concepts have been handed down with little modification to the present day, and their work ushered in the modern era of careful measurement of economic activity.

Of course the concept of national income involved a lot of careful tradeoffs, and one can (in theory) criticize those tradeoffs, or at least take them into account when using the concepts.  But here's the crucial point:  economics students are generally taught national income concepts as a kind of received wisdom, the same way students learn about electrons and species of animals:  GDP = C + I + G + NX.

The instruction might include a few qualifications about how GDP isn't really the only thing we care about, with a few examples of how it can be misleading, but then the instruction continues as if GDP were a completely coherent and uncontested concept.  (By the way, I chose electrons and species because those, too, are conceptualizations that would, in a world with no scarcity, attract more scrutiny and a more nuanced approach.)
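The identity itself, as taught, is just accounting. Here is what the off-the-rack version amounts to, with entirely hypothetical figures:

```python
# GDP by the expenditure approach, GDP = C + I + G + NX,
# with made-up figures (billions of dollars).
C = 13_000   # consumption
I = 3_400    # investment
G = 3_200    # government purchases
X = 2_300    # exports
M = 2_900    # imports

NX = X - M          # net exports can be (and here is) negative
GDP = C + I + G + NX
print(GDP)  # 19000
```

The arithmetic is trivial; everything contestable lives in how the components are defined and measured, which is precisely the part students receive as settled.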

Now I want to emphasize, I think this is for the most part appropriate.  If anything I think undergraduate education often goes too far in the other direction.  Students are often asked, in introductory classes, to criticize the giants of the field.  (This is a complaint Tarun has leveled at undergraduate philosophy education.  Students are expected to come up with something original to say before they are equipped to understand the concepts, much less respond intelligently to them.)  It's a waste of time and inculcates an inappropriate arrogance, a lack of respect for deep reading.  Students actually think they've come up with a refutation of Kant or whatever...  and why wouldn't they?  Their instructor has given them every indication that this is an appropriate way to think about it.

Similarly it would probably be a waste of time to walk economics students through the intellectual history of national income statistics.  Most undergraduate economics students won't do any further work in the field.  And there is a sort of intellectual "division of labor" in which very smart people like Kuznets develop ideas that can then be used in an "off-the-rack" way within the field.  This is the essence of scarcity management:  Kuznets does the work once, and then the knowledge can be replicated millions of times without further effort.  It's mass production in the intellectual space.  The same approach is used in math, certainly, and probably in every area of study.

But there is a danger that a gap will develop between people who think they understand a concept and people who have a true appreciation of the history of the idea.  People who only understand ideas as received wisdom (which is most people) struggle to accommodate criticisms of the framework without really having the tools to do so.  When they identify weaknesses, they think they're treading new ground, or they think they've been sold a bill of goods.  Or, possibly, they remain ignorant of any weakness in their framework and they apply it robotically.

Just as an example, it was recently brought to my attention that the cutoff for "statistical significance" is arbitrary.

On some level, I knew this, but I wouldn't have been able to articulate it with any precision (I still can't, really, but now I know enough to invoke the concept and dig deeper if I need to).  In the statistics courses I've taken, statistical significance is exactly the kind of received wisdom that gives students enough to do most of what they need to do, but leaves them vulnerable to overconfidence and unable to critique their own field.
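The arbitrariness is easy to see numerically: two results carrying nearly identical evidence can land on opposite sides of the conventional 0.05 line. A minimal sketch using the standard normal distribution (the z-values here are chosen by me purely to straddle the cutoff):

```python
from statistics import NormalDist

def two_sided_p(z):
    """Two-sided p-value for a z-statistic under the standard normal."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

p_a = two_sided_p(1.95)  # ~0.051: "not significant" at the 0.05 cutoff
p_b = two_sided_p(1.97)  # ~0.049: "significant" at the 0.05 cutoff

print(round(p_a, 3), round(p_b, 3))
# Nearly identical evidence, opposite verdicts: the cutoff is a convention.
```

Nothing in the mathematics privileges 0.05; it is received wisdom of exactly the kind described above.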

I don't have much more to say - I think these tradeoffs are pretty much unavoidable, and we can only hope that academics do a good job of picking the framework and providing at least some context so that students can, if they choose, learn more about the intellectual history of the concepts they are using.  "When the legend becomes fact, print the legend."  And similarly, "When the conceptualization becomes fact, print the conceptualization."  But it still feels elitist and a little gross to proceed this way.

Tuesday, December 09, 2014

Taxi Regulation and "Dirty" Politics

A quick comment on politics and taxi regulation.  Catherine Rampell has a thoughtful column in the Washington Post about the politics of taxi regulation, pointing out that deregulation has not been successful in the past:

Alas, as legal scholars such as Paul Stephen Dempsey and others later documented, this Wild West approach proved disastrous. Taxi entry surged. But, unexpectedly, prices rose in every single deregulated market. This likely happened for several reasons: Government regulations, it turns out, had been capping prices below market value (especially in underserved neighborhoods); fares were relatively opaque and unpredictable; and consumers were reluctant to price-shop or interrogate drivers about their insurance and safety records. They just hopped into the first available cab.

Over time, traffic and pollution became bigger issues, incomes fell as individual drivers secured fewer rides, and service declined. By the early 1990s, nearly every city had re-regulated.

I think these conclusions will come as a surprise to people who think that Uber is by definition good because it increases competition and is innovative and blah blah blah.  I am not particularly surprised by the findings, particularly when it comes to price increases.  Uber's pricing is highly opaque and in general seems to be about double what you would pay for a taxi in New York.  It's a mystery to me why this is regarded as "consumer friendly," unless you are restricting your analysis to particularly well-heeled consumers.

But leave all that aside.  The point I want to make has to do with the politics of deregulation.  Rampell notes that "Nevada recently banned Uber — after the company reportedly failed to obey laws relating to licensing, vehicle inspections and insurance — in a move widely interpreted as being orchestrated by Big Taxi."

I think a lot of people are repulsed by the idea that public policy might be driven by interest groups such as taxi drivers or medallion owners.  The mere fact that taxi regulation benefits incumbents might seem like reason to oppose it (though of course almost every existing regulation, as opposed to new regulation, benefits incumbents, pretty much by definition).  But here is the key point:  there is a kind of physics of politics that essentially requires that interest groups will drive these sorts of discussions.

Occasionally politicians take the initiative and make good policy for the sake of good policy.  (I think William Gladstone was an exemplar of this brand of politics, though he wasn't above trying to buy off key constituencies).  But politicians aren't generally going to stand up to behemoths like Uber unless they have some concrete reason for doing so.  And even the best policy is not going to last long unless it enjoys broad popularity or has a strong constituency behind it.  Something like taxi regulation is far too nuanced and arcane for the general public to form strong opinions about, and so you are left looking for other sources of political support.

And here I think we have a way of thinking about what it means to be a good politician.  A good politician doesn't sacrifice his or her career in futile defense of, for instance, sensible taxi regulation.  But a good politician tries to find ways to mobilize a constituency in favor of good policy.  In other words, it doesn't have to be the case that Uber gets what it wants whenever it outspends the taxi industry on lobbying, say 51x to 49x.  A 51/49 split is close enough for a politician to make the right decision.  But very few politicians can carry the banner of good policy if the ratio goes to something like 95/5.  If you believe that taxi regulation serves some public purpose, then you have to go out and find its natural allies and assemble them into a constituency that can sustain good policy.  (By the way, this doesn't have to be about money.  Quite often it's simply a matter of what opinions are voiced.  This is why it's so irritating that many people, particularly in the media, cast taxi regulation as a fight between innovative, high-tech Uber and the evil taxi cartel.)

And this goes into policy design at all stages.  If you want sustainable redistribution, you better find a way to build a lasting constituency that will support it (this is the logic behind the universality of Social Security and Medicare - see this old Sarang post).  If you want environmental regulation, you better make sure it is in someone's interest to keep the regulations strong.  In a lot of cases this is not a particularly difficult exercise, but it is an important aspect of the art of politics, and there is nothing whatsoever dirty about it.  Or at least, it is no dirtier than politics must be.

Sunday, December 07, 2014

More Light

Why is winter colder than summer?

Obviously the days in winter are shorter, thanks to the tilt of the earth.  But this is only part of it.  During the winter the sun is lower in the sky.  Partly, that weakens the sunlight because it has to pass through more atmosphere to get to the surface.  (Although query whether that would really make things colder, on average.  If the sunlight's energy is getting absorbed in the atmosphere, it's still making it warmer somewhere.)

But part of it is simply that the sunlight is coming in at a low angle, so that each unit of sunlight is spread out over a larger area of land, reducing the energy falling onto each square meter of the earth's surface.

Now all of this might seem very obvious.  I point it out because, atmospheric absorption aside, the angle of the sunlight doesn't affect how much you get when you are walking around outside.  If anything, you probably get more sunlight when it is coming in horizontally than you do when it is coming down from nearly straight up, at least if you are standing upright.  I think the sunlight is meaningfully weaker because of atmospheric absorption (which is why, I assume, it is less likely to burn your skin), but two of the big factors that make it cold in winter (short days and less sunlight per square meter) are non-factors in terms of your own exposure to sunlight.
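The geometry is easy to check. Flux per square meter of horizontal ground scales with the sine of the sun's elevation, while flux on a vertical surface (a standing person, very roughly) scales with its cosine. A sketch with a round illustrative number for surface irradiance, ignoring atmospheric absorption:

```python
import math

SOLAR = 1000.0  # W/m^2 reaching the surface; a round, made-up number

def on_ground(elev_deg):
    """Irradiance on a horizontal square meter of ground."""
    return SOLAR * math.sin(math.radians(elev_deg))

def on_standing_person(elev_deg):
    """Irradiance on a vertical square meter (an upright person, roughly)."""
    return SOLAR * math.cos(math.radians(elev_deg))

for elev in (70, 20):  # high summer sun vs. low winter sun
    print(elev, round(on_ground(elev)), round(on_standing_person(elev)))
# At 20 degrees the ground gets far less energy per square meter,
# but a vertical surface actually gets MORE than it does at 70 degrees.
```

So the per-square-meter factor that chills the landscape in winter runs the other way for a person standing upright, which is the point of the post.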

Tuesday, November 18, 2014

How We Know What We Know

Here's one way to think about knowledge, truth, etc.  It's an artificial thought experiment, but it's intended to focus attention on what I think are important issues.  It relies on two analogies, which are really variants of the same thought.

For the first analogy, imagine that we have dozens (or hundreds or thousands, doesn't really matter) of machines.  Each machine is like a slot machine:  you pull a lever, and a variable amount of money comes out.  However, these machines have two unusual features:

1.  Each machine has two levers.  You can pull either of these levers (but not both) once per hour (or whatever - assume we have the capacity to use each machine in such a way that they never "go to waste," that is, go unused when they could have given a payout).  It doesn't cost anything to play.

2.  The machines are completely opaque, in the sense that you can't tell what is going on inside by direct observation.  The only information you can obtain about each machine is a list of its historical payouts.  The list indicates which lever was pulled for each payout.  It is impossible to determine what the payout would have been if the other lever had been pulled.

Now let's assume that we always want to pull the lever that results in a higher payout.  At first, we will have no choice but to pull levers more or less at random.  (We have no basis for predicting which lever has a higher payout.)  As we gather data, we can consider how to use it.  For instance, one machine might always pay $1 if the left lever is pulled, but $2 if the right lever is pulled.  We should pull the lever on the right side, not the left.  Of course I am using "always" in a limited sense.  We can't rule out the possibility that in the next round this machine will pay $100 from the left or right lever.  "Always" is a backward-looking statement.

Now assume that everyone agrees on the state of affairs I've described, so there are no radical skeptics or anything when it comes to the basic situation.  (No one is saying things like, "How do we know there are really machines?")  However, people take different attitudes to what we can know about the machines' payouts.

Some people hypothesize an "underlying reality" that is built into the machines and that we can model using quantitative tools, giving us access to "the truth" about the function determining the payout stream.  Many of our theories may be wrong or imprecise, but there is a truth "out there" that we are capable of discovering through empirical investigation.

Other people think that any knowledge derived in this way is contingent at best and relies on unfounded assumptions about the degree to which the future will resemble the past.  To these people, "the truth" never comprises an absolute grasp of what is inside the machine (which is fundamentally inaccessible to us), but rather is a more complicated function of usefulness and "fit" with observational data.

Other people might think that there is no basis for any prediction, or for any knowledge about future payouts, because it is impossible to tell whether a machine will diverge from its historical pattern (as machines frequently do, even if they had previously been stable for years).  There is a good reason to pull a lever, but no good reason to pull any particular lever at any time.  We have no access to "the truth" and maybe it doesn't exist.  It is true that some machines seem to be utterly reliable.  But on the other hand, sometimes machines that seem utterly reliable start behaving weirdly, and we have no demonstrably effective way of sorting reliable machines from unreliable ones.

The second analogy is basically the same thing.  We are playing a video game.  We have access only to inputs and outputs, not the source code.  So in other words, we can make our characters jump across the screen, and we can propose a sort of theory of the physics underlying the video game world.  But the physics might change in ways that are unpredictable from level to level.  (Suddenly the coefficient of friction on the ground is much lower - it is an "ice" level.)  And the physics might even change on replaying a level.

You can imagine the same attitudes forming as in the previous example.  (Really the two examples are just about identical.)

Now I think one thing to note is that it seems respectable to deny that we have access to some kind of absolute "truth" and nevertheless to believe that we can do "better than random" when pulling the levers.  We can doubt whether we will ever reverse engineer the "one true source code," and yet we can navigate the video game world.  I don't think our only choices are at the extremes.  In fact, I think the extremes are more or less untenable, although there's room for disagreement.