Cato Op-Eds

Individual Liberty, Free Markets, and Peace

The total number of American workers who usually commute by transit declined from 7.65 million in 2016 to 7.64 million in 2017. This continues a downward trend from 2015, when there were 7.76 million transit commuters. Meanwhile, the number of people who drove alone to work grew by nearly 2 million, from 114.77 million in 2016 to 116.74 million in 2017.

These figures are from table B08301 of the 2017 American Community Survey, which the Census Bureau posted online on September 13. According to the table, the total number of workers in America grew from 150.4 million in 2016 to 152.8 million in 2017. Virtually all new workers drove to work, took a taxi or ride-hailing service, or worked at home, as most other forms of commuting, including walking and bicycling as well as transit, declined.

Transit commuting has fallen so low that more people now work at home than take transit to work. The number of workers reporting that they usually work at home totaled nearly 8.0 million in 2017, up from just under 7.6 million in 2016.

Two other tables, B08119 and B08121, reveal the incomes and median incomes of American workers by how they get to work. A decade ago, the average income of transit riders was almost exactly the same as the average for all workers. Today it is 5 percent higher, as the number of low-income transit riders has declined while the number of high-income riders – those earning $60,000 or more – has rapidly grown. Median incomes are usually a little lower than average incomes because very high-income people pull up the average. In 2017, the median income of transit riders exceeded the median income of all workers for the first time.
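
To see why medians usually sit below averages in skewed income data, consider a toy calculation in Python (made-up figures, not ACS numbers):

    # Hypothetical incomes: four modest earners and one high earner.
    incomes = [20_000, 30_000, 40_000, 50_000, 250_000]

    mean = sum(incomes) / len(incomes)           # 390,000 / 5 = 78,000
    median = sorted(incomes)[len(incomes) // 2]  # middle value = 40,000

    # The single high income lifts the mean far above the median, so a transit
    # median overtaking the overall median signals a real shift toward higher earners.
    print(f"mean: {mean:,.0f}  median: {median:,.0f}")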

For those interested in commuting numbers in their states, cities, or regions, I’ve posted a file showing commute data for every state, about 390 counties, 259 major cities, and 220 urbanized areas. The Census Bureau didn’t report data from smaller counties, cities, and urbanized areas because it deemed the results for those areas to be less statistically reliable. 

The file includes the raw numbers plus calculations showing the percentage of commuters (leaving out people who work at home) who drove alone, carpooled, took transit (with rail and bus broken out separately), bicycled, or walked to work. A separate column shows the percentage of the total who worked at home. The last column estimates the number of cars used for commuting, counting both solo drivers and carpoolers.
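
As a sketch of how such calculations work, here is a short Python example (made-up counts for one area; the category names and the per-occupant car formula are my assumptions, not necessarily the file's exact method):

    # Hypothetical commute counts, loosely mirroring ACS table B08301 categories.
    drove_alone, carpool_2, carpool_3 = 100_000, 10_000, 3_000
    transit, bicycle, walk, work_at_home = 8_000, 1_000, 3_000, 9_000

    commuters = drove_alone + carpool_2 + carpool_3 + transit + bicycle + walk
    all_workers = commuters + work_at_home

    # Mode shares leave people who work at home out of the denominator...
    transit_share = 100 * transit / commuters      # 6.4 percent
    # ...while the work-at-home share is a percentage of all workers.
    home_share = 100 * work_at_home / all_workers  # about 6.7 percent

    # One plausible car estimate: each solo driver contributes one car, and
    # each k-person carpool contributes one car split among k commuters.
    cars = drove_alone + carpool_2 / 2 + carpool_3 / 3
    print(f"{transit_share:.1f}% transit, {home_share:.1f}% at home, {cars:,.0f} cars")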

I’ve also posted similar files for 2016, 2015, 2014, 2010, 2007, and 2006. The formats of these files may differ slightly as I’ve posted them at various times in the past. Soon, I’ll post more files for commuting by income and other pertinent topics.

Dan Cadman of the Center for Immigration Studies (CIS) has written a blog post purporting to identify issues in a short brief that I wrote about U.S. citizens in Texas for whom ICE filed detainers. In it, he makes numerous inaccurate and unsupported assertions. Cadman presents zero evidence to rebut the conclusion of the brief and instead accuses an ICE supervisory officer of perjury because his statements fail to support Cadman’s position.

My brief uses data from Travis County, Texas, to identify people who claimed U.S. citizenship and presented Social Security Numbers to local authorities, but for whom ICE submitted detainer requests anyway, only to later cancel them or leave them unexecuted. Cadman responds:

While it’s true that people who later prove to be U.S. citizens sometimes find themselves in removal proceedings (something I’ve previously commented on and explained), most often this occurs because an individual doesn’t even know he is a U.S. citizen…

In the link supporting his “most often” claim, he cites a single case in which the person didn’t know he was a U.S. citizen, while we know of dozens of individual cases in which detainers were filed for U.S. citizens who asserted their citizenship at the start of the process. In any case, every person in my brief asserted U.S. citizenship at the outset, from the time of their booking by the Travis County Sheriff’s Office until ICE finally canceled their detainer. Cadman continues:

[Bier] would have us believe that ICE agents actively “target” American citizens even though it is clear that they have no hand at all into what individuals are arrested by police and booked into Travis County (or any Texas) jail, and merely respond to the information passed to them as a consequence.

I never claimed that ICE agents “actively” seek out people whom they know to be American citizens. As I wrote in the executive summary of my brief, these are “mistakes” that ICE only belatedly attempts to correct. In any case, if a law enforcement agency arrests hundreds of innocent people, it is perfectly legitimate to say that hundreds of innocent people were targeted by that agency, even if the individual agents didn’t know or intend to target innocent people. Moreover, it is incorrect to claim that ICE agents “merely respond to information passed to them”—the Travis County Sheriff’s Office doesn’t make assessments of removability or citizenship, nor does it issue detainers. ICE makes those determinations.

Cadman attempts to argue that even though ICE canceled the detainers for these people, we cannot suppose that it was because they were U.S. citizens. He attempts to sketch out what he believes is happening:

ICE agents don’t, nor should they, always accept such assertions [of U.S. citizenship] at face value because they know the frequency with which false claims are made. One strategy they exercise is to immediately file the detainer while concurrently obtaining the release date of the individual being held by the police. They then work against the clock to either verify the claim or disprove it… Keep in mind that when ICE agents withdraw a detainer, it doesn’t mean the claim isn’t false — it just means they couldn’t break it in the time frame they had to investigate.

If this is what ICE agents are doing, it would violate current ICE policies, which require agents to issue detainers based on what they believe to be “probable cause” of removability. A simple assertion of U.S. citizenship would never overcome a determination based on actual probable cause (such as a biometric record of a prior deportation). In the bad old days before even agent-determined probable cause was required, an assertion of U.S. citizenship would not have triggered cancelation either. Again, ICE would require the U.S. citizen to substantiate the claim first.

Cadman’s scenario implies that ICE agents are issuing detainers for people claiming U.S. citizenship based on their gut instincts and then hoping to prove that the person is lying before they are released. If this is what is occurring, it would indeed explain why U.S. citizens are regularly targeted by ICE, and it would also show that the agency is breaking its own policy. That is a poor defense of ICE’s actions.

In any case, my brief quoted court testimony under oath from ICE Supervisory Detention and Deportation Officer John Drane of Rhode Island stating that, in fact, a detainer canceled for a person claiming U.S. citizenship was almost certainly canceled because the person was a U.S. citizen. Cadman responds:

while even ICE agents in the northeast would not be completely immune to the phenomenon of false claims, the claims would be of a significantly smaller scale and different character from those in Texas. This would certainly have had an impact on how Drane framed his response to the question of withdrawing a detainer, because his experiences would be nothing like those of ICE agents working in south or central Texas.

This is simply incorrect. The rate of U.S. citizenship claims overall was actually higher in Rhode Island around this time (7.2 percent) than in Travis County (5.7 percent), so Drane dealt with the same issue: some people do make false claims, while others, including the litigant in the case, make valid claims of U.S. citizenship when targeted with detainers. Cadman continues:

The time frame of Drane’s deposition (April 2015) is also significant. In November 2014, President Obama and then-Homeland Security Secretary Jeh Johnson announced a host of new “executive actions” that would govern how immigration agencies administered their responsibilities… many detainers were withdrawn as not meeting the new criteria of criminality drawn up by Secretary Johnson and his cohorts…

Cadman presents no data or even anecdotes to support the claim that many detainers were withdrawn due to the Jeh Johnson enforcement criteria. In fact, the Johnson policies changed the criteria for issuing a detainer, so detainers for people who were not subject to enforcement priorities were not issued to begin with, leading to a significant decline in detainers issued. In any case, 90 percent of the U.S. citizens identified in my brief were targeted before Johnson’s new enforcement priorities were in effect or after the Trump administration rescinded them. In addition, the rate of cancelations for people claiming U.S. citizenship actually decreased during those years. Cadman continues:

It’s not a surprise that Drane avoided speaking to these very real, very major reasons that many detainers were withdrawn by ICE. One can surmise that he sidestepped the issue of agents being obliged to cancel detainers under the imposed-from-above priority system for fear of his job.

Here, Cadman actually accuses an ICE supervisory agent of lying under oath to avoid disclosing the reasons for the detainer cancelations. I don’t understand how Cadman can have complete faith in ICE under some circumstances while assuming the worst about it in others without any evidence. More importantly, Cadman’s claims about Drane are simply false. Drane had zero incentive to lie. The Obama administration was not hiding its looser enforcement policies in 2015—it was bragging about them. Moreover, in the context of this case, Drane was admitting something that would place blame on his office for wrongfully targeting U.S. citizens—something that the Obama administration would certainly not want to disclose. Lastly, why would he risk potential jail time by perjuring himself on this point? It simply makes no sense. Cadman concludes:

Bier has taken what are clearly dubious conclusions about the number of U.S. citizens against whom detainers were filed in the Travis County jail after arrest for criminal offenses, and then through extrapolation and aggregation, applied them to assert that, if this many were caught up in ICE “targeting” of citizens in the county, then as a matter of simple multiplication one can derive how many U.S. citizens must have been “targeted” statewide… Each county and each state is sufficiently unique in population and demographics that using any one of them to extrapolate to a whole is different entirely than using legitimate random sampling techniques.

Cadman is correct that a statewide random sample would provide far more useful data. Every county in Texas should release this information if it has it. But the data that we do have allow us to learn something about Travis County, at a minimum. Travis County may be an outlier in either direction (we simply don’t know), but I never claimed that my extrapolation from Travis County to the whole state of Texas is anything but an estimate.

Travis County, Texas, is the third-largest recipient of detainers in the state, providing a significant sample of the detainers in Texas. Moreover, the dynamics in Travis County are substantially similar to other counties in Texas—all are fairly close to the border and all are subject to Texas law with regard to immigration enforcement. Cadman takes issue with my hedging this extrapolation, but that is simply what prudent analysts do when the evidence is incomplete.

My brief shows that ICE often issues detainer requests for people who claim U.S. citizenship and present Social Security Numbers to local authorities, only to then cancel those requests. The best explanation—based on ICE policies and ICE testimony—is that ICE issued detainers for hundreds of U.S. citizens. It is noteworthy that ICE itself in a statement to the Washington Post did not use any of Cadman’s poor defenses, but only asserted that it works to improve its processes over time. That may be true, but severe deficiencies still remain.

Not long after the limited-government U.S. Constitution was ratified and the new government resumed operation, numerous political leaders began pushing to expand federal power. Leading politicians of the 1790s did not agree with each other about the proper scope of federal authority, either legally or practically.

Treasury Secretary Alexander Hamilton proposed ideas for top-down manipulation of the economy. And fellow Federalist President John Adams signed into law the infamous Alien and Sedition Acts in 1798, which among other things outlawed any “false, scandalous and malicious writing” against the government, the Congress, and the president.

An article in the Washington Post the other day discussed some interesting details regarding the enforcement of the sedition statute:

Adams and his Federalist Party supporters in Congress passed the Alien and Sedition Acts under the guise of national security, supposedly to safeguard the nation at a time of preparing for possible war with France. The “Alien” part of the law allowed the government to deport immigrants and made it harder for naturalized citizens to vote. But the law mainly was designed to mute backers of the opposition Democratic-Republican Party led by Thomas Jefferson, who also happened to be the vice president. Jefferson had finished second to Adams in the 1796 presidential election and again ran against him in 1800.

An early target of the new law was Rep. Matthew Lyon, who had accused Adams of “ridiculous pomp.” In the fall of 1798 the government accused the Vermont congressman of being “a malicious and seditious person, and of a depraved mind and a wicked and diabolical disposition.” He was convicted of sedition, fined $1,000 and sentenced to four months in prison. Lyon campaigned for reelection from jail and won in a landslide. On his release in February 1799, supporters greeted him with a parade and hailed him as “a martyr to the cause of liberty and the rights of man.”

… Another target was James Callender, a pro-Jefferson journalist for the Richmond Examiner and the man who had exposed Federalist Alexander Hamilton’s extramarital affair. In 1800, Callender wrote an election campaign pamphlet that said of Adams: “As President he has never opened his lips, or lifted his pen, without threatening and scolding; the grand object of his administration has been to exasperate the rage of contending parties … and destroy every man who differs from his opinions.” Callender was convicted of sedition, fined $200 and sent to federal prison for nine months. He continued to write from his prison cell, calling Adams “a gross hypocrite and an unprincipled oppressor.”

… The government also came after critics of some members of the Adams administration, such as Treasury Secretary Hamilton. In 1799, Charles Holt, editor of the New London Bee in Connecticut, published an article accusing Hamilton of seeking to expand the U.S. military into a standing army. He also took personal jabs at Hamilton, asking, “Are our young officers and soldiers to learn virtue from General Hamilton? Or like their generals are they to be found in the bed of adultery?” The government promptly charged Holt with being a “wicked, malicious seditious and ill-disposed person — greatly disaffected” to the U.S. government. He was fined $200 and sent to jail for three months.

The speech crackdown extended even to private remarks, as Luther Baldwin, the skipper of a garbage boat in Newark, discovered. In July 1798, while passing through Newark on his way to his summer home in Massachusetts, Adams rode in his coach in a downtown parade complete with a 16-cannon salute. When Baldwin and his buddy Brown Clark heard the cannon shots while drinking heavily at a local tavern, Clark remarked, “There goes the president, and they are firing at his arse.” Baldwin responded that he didn’t care “if they fired thro’ his arse.” The tavern owner reported the conversation, and both drinkers were fined and jailed for sedition.

Thomas Jefferson and James Madison led the opposition to the big government Federalist policies of the 1790s, and “in the end, widespread anger over the Alien and Sedition Acts fueled Jefferson’s victory over Adams in the bitterly contested 1800 presidential election.” Free speech was restored and the incoming president would focus on cutting the excess spending, taxes, and debt built up by the prior Federalist administrations.

Hardly a day goes by without a report in the press about some new addiction. There are warnings about addiction to coffee. Popular psychology publications talk of “extreme sports addiction.” Some news reports even alert us to the perils of chocolate addiction. One gets the impression that life is awash in threats of addiction. People tend to equate the word “addiction” with “abuse.” Ironically, “addiction” is a subject of abuse.

The American Society of Addiction Medicine defines addiction as a “chronic disease of brain reward, motivation, memory and related circuitry…characterized by the inability to consistently abstain, impairment in behavioral control, craving” that continues despite resulting destruction of relationships, economic conditions, and health. A major feature is compulsiveness. Addiction has a biopsychosocial basis with a genetic predisposition and involves neurotransmitters and interactions within reward centers of the brain. This compulsiveness is why alcoholics or other drug addicts will return to their substance of abuse even after they have been “detoxed” and despite the fact that they know it will further damage their lives.

Addiction is not the same as dependence. Yet politicians and many in the media use the two words interchangeably. Physical dependence represents an adaptation to the drug such that abrupt cessation or tapering off too rapidly can precipitate a withdrawal syndrome, which in some cases can be life-threatening. Physical dependence is seen with many categories of drugs besides the drugs commonly abused. It is seen, for example, with many antidepressants, such as fluoxetine (Prozac) and sertraline (Zoloft), and with beta blockers like atenolol and propranolol, used to treat a variety of conditions including hypertension and migraines. Once a patient is properly tapered off the drug on which they have become physically dependent, they do not feel a craving or compulsion to return to the drug.

Some also confuse tolerance with addiction. Similar to dependency, tolerance is another example of physical adaptation. Tolerance refers to the decrease in one or more effects a drug has on a person after repeated exposure, requiring increases in the dose.

Science journalist Maia Szalavitz, writing in the Columbia Journalism Review, ably details how journalists perpetuate this lack of understanding and fuel misguided opioid policies.

Many in the media share responsibility for the mistaken belief that prescription opioids rapidly and readily addict patients—despite the fact that Drs. Nora Volkow and Thomas McLellan of the National Institute on Drug Abuse point out that addiction is very uncommon, “even among those with preexisting vulnerabilities.” Cochrane systematic reviews in 2010 and 2012 of chronic pain patients found addiction rates in the 1 percent range, and a report on over 568,000 patients in the Aetna database who were prescribed opioids for acute postoperative pain between 2008 and 2016 found a total “misuse” rate of 0.6 percent.

Equating dependency with addiction has caused lawmakers to impose opioid prescription limits that are not evidence-based, and it is making patients suffer needlessly after being tapered too abruptly or cut off entirely from their pain medicine. Many, in desperation, seek relief in the black market, where they get exposed to heroin and fentanyl. Some resort to suicide. There have been enough reports of suicides that the US Senate is poised to vote on opioid legislation that “would require HHS and the Department of Justice to conduct a study on the effect that federal and state opioid prescribing limits have had on patients — and specifically whether such limits are associated with higher suicide rate.” And complaints about the lack of evidence behind present prescribing policy led Food and Drug Administration Commissioner Scott Gottlieb to announce plans last month for the FDA to develop its own set of evidence-based guidelines.

Now there is talk in media and political circles about the threats of “social media addiction.” But there is not enough evidence to conclude that spending extreme amounts of time on the internet and with social media is an addictive disorder. One of the leading researchers on the subject stresses that most reports on the phenomenon are anecdotal and peer-reviewed scientific research is scarce. A recent Pew study found the majority of social media users would not find it difficult to give it up. The American Psychiatric Association does not consider social media addiction or “internet addiction” a disorder and does not include it in its Diagnostic and Statistical Manual of Mental Disorders (DSM), considering it an area that requires further research.

This doesn’t stop pundits from warning us about the dangers of social media addiction. Some warnings might be politically motivated. Recent reports suggest Congress might soon get into the act. If that happens, it could threaten freedom of speech and freedom of the press. It could also generate billions of dollars in government spending on social media addiction treatment.

Before people see more of their rights infringed or are otherwise harmed by unintended consequences, it would do us all a great deal of good to be more accurate and precise in our terminology. It would also help if lawmakers learned more about the matters on which they create policy.

As Hurricane Florence spins toward the Carolina coast, the nation’s attention will be on the disaster readiness and response of governments and the affected communities. Have lessons been learned since the deeply flawed government response to Hurricane Katrina back in 2005?

I examined FEMA and the Katrina response in this study, discussing both the government failures and the impressive private-sector relief efforts.

Last year, Hurricane Maria devastated Puerto Rico, again exposing all sorts of government failures. Well-known chef José Andrés has a new book on the Maria response. He had an eye-opening experience on the island volunteering on relief efforts with his World Central Kitchen.

The Washington Post’s review of the book says that Andrés saw the flaws of top-down bureaucratic relief efforts and embraces more of a spontaneous order view of effective disaster relief:

With We Fed an Island, chef-and-restaurateur-turned-relief worker José Andrés doesn’t just tell the story about how he and a fleet of volunteers cooked millions of meals for the Americans left adrift on Puerto Rico after Hurricane Maria. He exposes what he views as an outdated top-down, para-military-type model of disaster relief that proved woefully ineffective on an island knocked flat by the Category 4 hurricane.

… ‘My original plan was to cook maybe ten thousand meals a day for five days, and then return home,’ Andrés writes. Instead, Andrés and the thousands of volunteers who composed Chefs for Puerto Rico remained for months, preparing and delivering more than 3 million meals to every part of the island. They didn’t wait for permission from FEMA.

… These grass-roots culinary efforts didn’t always sit well with administration officials or with executives at hidebound charities, in part because Andrés was no diplomat. He trolled Trump on Twitter over the situation on Puerto Rico. He badgered FEMA for large contracts to ramp up production to feed even more hungry citizens. He infamously told Time magazine that the “American government has failed” in Puerto Rico. A chef used to fast-moving kitchens, Andrés had zero patience for slow-footed bureaucracy, especially in a time of crisis.

… After dealing with so much red tape and mismanagement (remember the disastrous $156 million contract that FEMA awarded to a small, inexperienced company to prepare 30 million hot meals?), Andrés wants the government and nonprofit groups to rethink the way they handle food after a large-scale natural disaster. He wants them to drop the authoritarian, top-down style and embrace the chaos inherent in crisis. Work with available local resources, whether residents or idle restaurants and schools. Give people the authority and the means to help themselves. Stimulate the local economy.

‘What we did was embrace complexity every single second,’ Andrés writes. ‘Not planning, not meeting, just improvising. The old school wants you to plan, but we needed to feed the people.’

Andrés and World Central Kitchen have embraced complexity. 

Hail to the chef!


As of this writing, Tuesday, September 11, Hurricane Florence is threatening millions of folks from South Carolina to Delaware. It’s currently forecast to be near the threshold of the dreaded Category 5 by tomorrow afternoon. Current thinking is that its environment will become a bit less conducive as it nears the North Carolina coast on Thursday afternoon, but that it will still hit as a major hurricane (Category 3+). It’s also forecast to slow down or stall shortly thereafter, which means it will dump disastrous amounts of water on southeastern North Carolina. Isolated totals of over two feet may be common.

At the same time that it makes landfall, there is going to be the celebrity-studded “Global Climate Action Summit” in San Francisco, and no doubt Florence will be the poster girl.

There’s likely to be the usual hype about tropical cyclones (the generic term for hurricanes) getting worse because of global warming, even though their integrated energy and frequency, as published by Cato Adjunct Scholar Ryan Maue, show no warming-related trend whatsoever.

Maue’s Accumulated Cyclone Energy index shows no increase in global power or strength.

Here is the prevailing consensus opinion of the National Oceanic and Atmospheric Administration’s Geophysical Fluid Dynamics Laboratory (NOAA GFDL): “In the Atlantic, it is premature to conclude that human activities–and particularly greenhouse gas emissions that cause global warming–have already had a detectable impact on hurricane activity.”

We’ll also hear that associated rainfall is increasing along with oceanic heat content. Everything else being equal (dangerous words in science), that’s true. And if Florence does stall out, hey, we’ve got a climate change explanation for that, too! The jet stream is “weirding” because of atmospheric blocking induced by Arctic sea-ice depletion. This is a triple bank shot on the climate science billiards table. If that seems a stretch, it is, but climate models can be and are “parameterized” to give what the French climatologist Pierre Hourdin recently called “an anticipated acceptable range” of results.

The fact is that hurricanes are temperamental beasts. On September 11, 1984, Hurricane Diana, also a Category 4, took aim at pretty much the same spot where Florence is forecast to make landfall—Wilmington, North Carolina. And then—34 years ago—it stalled and turned a tight loop for a day, upwelling the cold water that lies beneath the surface, and it rapidly withered to a Category 1 before finally moving inland. (Some recent model runs for Florence have it looping over the exact same place.) The point is that what is forecast to happen on Thursday night—a major Category 3+ landfall—darned near happened over three decades earlier… and exactly 30 years before that, in 1954, Hurricane Hazel made a destructive Category 4 landfall just south of the NC/SC border. The shape of the Carolina coastlines and barrier islands makes the two states very susceptible to destructive hits. Fortunately, this proclivity toward taking direct hits from hurricanes has also taught the locals to adapt—many homes are on stilts, and there is a resilience built into their infrastructure that is lacking further north.

There’s long been a running research thread on how hurricanes may change in a warmer world. One thing that seems plausible is that the maximum potential power may shift a bit further north. What would that look like? Dozens of computers have cranked through thousands of years of simulations, and we have a mixture of results, but the consensus is that there will be slightly fewer but more intense hurricanes by the end of the 21st century.

We actually have an example of how far north a Category 4 can make landfall: on August 27, 1667, one struck the tidewater region of southeast Virginia. It prompted the publication of a pamphlet in London called “Strange News from Virginia, being a true relation of the great tempest in Virginia.” The late, great weather historian David Ludlum published an excerpt:

Having this opportunity, I cannot but acquaint you with the Relation of a very strange Tempest which hath been in these parts (with us called a Hurricane) which began on Aug. 27 and continued with such Violence that it overturned many houses, burying in the Ruines much Goods and many people, beating to the ground such as were in any ways employed in the fields, blowing many Cattle that were near the Sea or Rivers, into them, (!!-eds), whereby unknown numbers have perished, to the great affliction of all people, few escaped who have not suffered in their persons or estates, much Corn was blown away, and great quantities of Tobacco have been lost, to the great damage of many, and the utter undoing of others. Neither did it end here, but the Trees were torn up by their roots, and in many places the whole Woods blown down, so that they cannot go from plantation to plantation. The Sea (by the violence of the winds) swelled twelve Foot above its usual height, drowning the whole country before it, with many of the inhabitants, their Cattle and Goods, the rest being forced to save themselves in the Mountains nearest adjoining, where they were forced to remain many days in great want.

Ludlum also quotes from a letter from Thomas Ludwell to Virginia Governor Lord Berkeley about the great tempest:

This poore Country…is now reduced to a very miserable condition by a continual course of misfortune…on the 27th of August followed the most dreadful Harry Cane that ever the colony groaned under. It lasted 24 hours, began at North East and went around to Northerly till it came to South East when it ceased. It was accompanied by a most violent raine, but no thunder. The night of it was the most dismal time I ever knew or heard of, for the wind and rain raised so confused a noise, mixed with the continual cracks of falling houses…the waves were impetuously beaten against the shores and by that violence forced and as it were crowded the creeks, rivers and bays to that prodigious height that it hazarded the drownding of many people who lived not in sight of the rivers, yet were then forced to climb to the top of their houses to keep themselves above water…But then the morning came and the sun risen it would have comforted us after such a night, had it not lighted to us the ruins of our plantations, of which I think not one escaped. The nearest computation is at least 10,000 houses blown down.

It is too bad that there were no anemometers at the time, but the damage and storm surge are certainly consistent with a Category 4 storm. And this was in 1667, at the nadir of the Little Ice Age.

A Maryland story in the Washington Post last week presents a classic case of local political corruption. The broader message of the story is that when we give government the power to regulate an activity—in this case liquor sales—we open the door to corruption.

Even if you believe that regulatory regimes are created with good intentions, the politicians and officials in charge inevitably get swarmed by lobbyists, and some of them will focus on lining their own pockets. With respect to the public interest, the resulting policy outcomes are a crapshoot. From the Post story:

Former Maryland state delegate Michael L. Vaughn (D) was sentenced to 48 months in federal prison Tuesday after he was convicted of accepting cash in exchange for votes that would expand liquor sales in Prince George’s County.

A jury found Vaughn guilty of conspiracy and bribery in March. During his six-day trial in U.S. District Court in Maryland, Vaughn and his attorneys argued that the bundles of cash he received from liquor store owners and a lobbyist in 2015 and 2016 were campaign contributions that he failed to report because he had personal financial problems.

But prosecutors for the government argued that the more than $15,000 that changed hands in a coffee shop bathroom, a dark restaurant and other locations throughout the county were bribes.

… Sentencing Judge Paula Xinis called Vaughn’s misconduct ‘exceptionally serious’ and ‘grievous bribery.’

Vaughn was one of seven arrested last year in a federal corruption case that investigators called “Operation Dry Saloon.” Liquor store owners, lobbyists, former liquor board commissioners and former Prince George’s County Council member William A. Campos (D) conspired to pass laws that would allow for Sunday liquor sales in the county in exchange for cash.

… Prosecutors, however, argued that Vaughn and former chief liquor inspector David Son hashed out a scheme in which local liquor store owners Young Paig and Shin Ja Lee would pay Vaughn $20,000 over two years to clear the way for Sunday sales.

… ‘He fully embraced the pay-to-play culture that has been a repeat phrase in this court for a decade,’ Windom said, alluding to the 87-month sentence former Prince George’s County executive Jack Johnson received for bribery and corruption.

Local governments have large and excessive power over private land development, and that power has long been a source of corruption. Here’s what the Washington Post said about Jack Johnson’s crimes in a 2011 story:

Jack Johnson, a Democrat who was county executive from 2002 until December 2010, came to the attention of federal authorities in 2006, when the FBI began investigating allegations of corruption, campaign finance violations and tax fraud. Authorities found massive corruption centered around a “pay-to-play culture” that began months after Johnson took office.

‘Under Jack Johnson’s leadership, government in Prince George’s County literally was for sale,’ the [sentencing] memo said.

The pay-to-play scheme involved several developers, including Laurel physician and developer Mirza H. Baig … In his plea agreement, Jack Johnson acknowledged accepting up to $400,000 from the scheme.

Johnson, 62, was charged last November with evidence tampering and destruction of evidence after federal agents arrested him and his wife, 59, at their Mitchellville home. They were overheard on a wiretap scheming to stash $79,600 in cash in Leslie Johnson’s underwear and flush a $100,000 check that Jack Johnson received as a bribe from a developer.

… On the day of their arrests, Johnson was at Baig’s office picking up a cash bribe and talking about how he would continue the corruption ‘through his wife’s new position on the county council,’ the memorandum said.

‘He proudly bragged about how he was going to orchestrate approval of various funding and approvals by the County Council for Baig’s projects,’ according to the memo.

Federal officials valued the benefits that Baig received in exchange for illegal payments to Johnson at more than $10 million on two development projects.

With public healthcare programs accounting for over a trillion dollars of federal spending, efforts to identify and remedy sources of waste are increasing. A new working paper finds: 

There is substantial waste in U.S. healthcare, but little consensus on how to identify or combat it. We identify one specific source of waste: long-term care hospitals (LTCHs). These post-acute care facilities began as a regulatory carve-out for a few dozen specialty hospitals, but have expanded into an industry with over 400 hospitals and $5.4 billion in annual Medicare spending in 2014. We use the entry of LTCHs into local hospital markets and an event study design to estimate LTCHs’ impact. We find that most LTCH patients would have counterfactually received care at Skilled Nursing Facilities (SNFs) – post-acute care facilities that provide medically similar care to LTCHs but are paid significantly less – and that substitution to LTCHs leaves patients unaffected or worse off on all measurable dimensions. Our results imply that Medicare could save about $4.6 billion per year – with no harm to patients – by not allowing for discharge to LTCHs.
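
For readers curious about the “event study design” mentioned in the abstract, here is a minimal sketch of the idea in Python (a made-up toy panel, not the authors’ data or code): markets that gain an LTCH are compared with never-entered markets, before and after entry, using dummies for years relative to the entry year.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    # Toy panel: 40 hospital markets observed 2005-2014; half get an LTCH in 2010.
    rows = []
    for market in range(40):
        entered = market < 20
        for year in range(2005, 2015):
            rel_year = year - 2010 if entered else -1  # -1 doubles as the reference bin
            true_effect = 1.0 if entered and year >= 2010 else 0.0
            rows.append({"market": market, "year": year, "rel_year": rel_year,
                         "outcome": true_effect + rng.normal(scale=0.5)})
    df = pd.DataFrame(rows)

    # Outcome (e.g., spending per patient) on event-time dummies plus market and
    # year fixed effects; rel_year = -1, the year before entry, is the omitted bin.
    fit = smf.ols("outcome ~ C(rel_year, Treatment(reference=-1)) + C(market) + C(year)",
                  data=df).fit()
    print(fit.params.filter(like="rel_year"))  # roughly 0 before entry, 1 after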

The cost of healthcare in the United States remains a significant problem, but eliminating regulatory carve-outs such as LTCHs is one way to address this growing issue.

Research assistant Erin Partin contributed to this blog post.


Dedicated readers may recall my having reported here, several years ago, on the suit filed by Colorado’s Fourth Corner Credit Union against the Kansas City Fed — after the Fed refused it a Master Account on the grounds that it planned to cater to Colorado’s marijuana-related businesses. Until then the episode was almost unique, for the Fed had scarcely ever refused a Master Account to any properly licensed depository institution. Eventually the Fed and Fourth Corner reached a compromise, of sorts, with the Fed agreeing to grant the credit union an account so long as it promised not to do business with the very firms it was originally intended to serve!

Well, as The Wall Street Journal’s Michael Derby reported last week, the Fed once again finds itself being sued for failing to grant a Master Account to a duly chartered depository institution. Only the circumstances couldn’t be more different. The plaintiff this time, TNB USA Inc., is a Connecticut-chartered bank; and its intended clients, far from being small businesses that cater to herbalistas, include some of Wall Street’s most venerable establishments. Also, although TNB is suing the New York Fed for not granting it a Master Account, opposition to its request comes mainly, not from the New York Fed itself, but from the Federal Reserve System’s head honchos in Washington. Finally, those honchos are opposed to TNB’s plan, not because they worry that TNB’s clients might be breaking federal laws, but because of unspecified “policy concerns.”

Just what are those concerns? The rest of this post explains. But I’ll drop a hint or two by observing that the whole affair (1) has nothing to do with either promoting or opposing safe banking and (2) has everything to do with (you guessed it) the Fed’s post-2008 “floor” system of monetary control and the interest it pays on bank reserves to support that system.

What’s In a Name?

To understand the Fed’s concerns, one has first to consider TNB’s business plan. Doing that in turn means demolishing a myth that has already taken root concerning that enterprise — one based entirely on its name.

You see, “TNB” stands for “The Narrow Bank.” And some commentators, including John Cochrane, initially took this to mean that TNB was supposed to be a narrow bank in the conventional sense of the term, meaning one that would cater to ordinary but risk-averse depositors — like your grandma — by investing their money entirely in perfectly safe assets, such as cash reserves or Treasury securities. For example, the Niskanen Center’s Daniel Takash says that, if TNB wins its suit,

it would offer many businesses (and potentially consumers) the option [to] save their money in a safer financial institution and increase interest-rate competition in the banking industry.

Fans of narrow banking see it as a superior alternative to the present practice of insuring bank deposits while allowing banks to use such deposits to fund risky investments.

The assumption that TNB has no other aim than that of being a safer alternative to already established banks naturally makes the Fed’s opposition to it seem irrational: “Fed Rejects Bank for Being Too Safe,” is the attention-getting (but equally question-begging) headline assigned to Matt Levine’s Bloomberg article about the lawsuit. It seems irrational, that is, unless one assumes that Fed officials place other interests above that of financial-system safety. “That the Fed, which is a banker’s bank, protects the profits of the big banks’ system against competition, would be the natural public-choice speculation,” Cochrane observes. Alternatively, he wonders whether his vision of a narrow banking system might not be

as attractive to the Fed as it should be. If deposits are handled by narrow banks, which don’t need asset risk regulation, and risky investment is handled by equity-financed banks, which don’t need asset risk regulation, a lot of regulators and “macro-prudential” policy makers, who want to use regulatory tools to control the economy, are going to be out of work.

Get Lost, Grandma!

No one who knows me will imagine that I’d go out of my way to defend the Fed against the charge that it doesn’t always have the general public’s best interests in mind. Yet I’m compelled to say that explanations like Cochrane’s for the Fed’s treatment of TNB, let alone ones that suppose that the Fed has it in for safety-minded bankers, miss their mark. Such explanations badly misconstrue TNB’s business plan, especially by failing to grasp the significance of the declaration, included in its complaint against the New York Fed, that its “sole business will be to accept deposits only from the most financially secure institutions” (my emphasis).

You see, despite what Cochrane and Levine and some others have suggested, TNB was never meant to be a bank for me, thee, or the fellow behind the tree. Nor would it cater to any of our grandmothers. And why would it bother to? After all, unless grandma keeps over $250,000 in her checking account, her ordinary bank deposit is already safer than a mouse in a malt-heap. There’s no need, therefore, for any Fed conspiracy to keep a safe bank aimed at ordinary depositors from getting off the ground.

Instead TNB is exclusively meant to serve non-bank financial institutions, and money market mutual funds (MMMFs) especially. Its purpose is to allow such institutions, which are not able to take direct advantage of the Fed’s policy of paying interest on excess reserves (IOER), to do so indirectly. In other words, TNB is meant to serve as a “back door” by which non-banks may gain access to the Fed’s IOER payments, with their TNB deposits serving as surrogate Fed balances, thereby allowing non-banks to realize higher returns, with less risk, than they might realize by investing directly in Treasury securities. J.P. Koning gets this (and much else) right in his own post about TNB, published while yours truly was readying this one for press:

TNB is designed as a pure warehousing bank. It does not make loans to businesses or write mortgages. All it is designed to do is accept funds from depositors and pass these funds directly through to the Fed by redepositing them in its Fed master account. The Fed pays interest on these funds, which flow through TNB back to the original depositors, less a fee for TNB. Interestingly, TNB hasn’t bothered to get insurance from the Federal Deposit Insurance Corporation (FDIC). The premiums it would have to pay would add extra costs to its lean business model. Any depositor who understands TNB’s model wouldn’t care much anyways if the deposits are uninsured, since a deposit at the Fed is perfectly safe.

Once one realizes what TNB is about, explaining the Fed’s reluctance to grant it a Master Account becomes as easy as winking. The explanation, in a phrase, is that, were it to gain a charter, TNB could cause the Fed’s present operating system, or a substantial part of it, to unravel. Having gone to great lengths to get that system up and running, the Fed doesn’t want to see that happen. Since the present operating system is chiefly the brainchild of the Federal Reserve Board, it’s no puzzle that the Board is leading the effort to deny TNB its license.

How would TNB’s presence matter? The Fed has been paying interest on banks’ reserve balances, including their excess reserves, since October 2008. Ever since then, IOER rates have exceeded yields on many shorter-term Treasury securities — while being free from the interest-rate risk associated with holdings of longer-term securities. But banks alone (that is, “depository institutions”) are eligible for IOER. Other financial firms, including MMMFs, have had to settle for whatever they could earn on their own security holdings or for the fixed offering rate on the Fed’s Overnight Reverse Repurchase (ON-RRP) facility, which is presently 20 basis points lower than the IOER rate.

Naturally, any self-respecting MMMF would relish the opportunity to tap into the Fed’s IOER program. But how can any of them do so? Not being depository institutions, they can’t earn it directly. Nor will placing funds in an established bank work, since such a bank will only “pass through” a modest share of its IOER earnings, keeping some — and probably well over 20 basis points — to cover its expenses and profits. But a bank specifically designed to cater to the MMMFs’ needs — now that’s a horse of a different color.
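
To make the arithmetic concrete, here is a toy comparison in Python (illustrative rates; the TNB fee is my assumption, since TNB’s actual pricing isn’t public):

    # Illustrative rates, in percent per year (not current figures).
    ioer = 1.95             # rate the Fed pays banks on reserve balances
    on_rrp = ioer - 0.20    # the ON-RRP rate sits 20 basis points below IOER
    tnb_fee = 0.05          # hypothetical fee TNB keeps for itself

    balance = 1_000_000_000  # a money fund parking $1 billion for a year

    earn_on_rrp = balance * on_rrp / 100             # $17,500,000 per year
    earn_via_tnb = balance * (ioer - tnb_fee) / 100  # $19,000,000 per year

    # So long as TNB's fee stays under the 20-basis-point ON-RRP spread, the
    # fund earns more by routing cash through TNB than at the ON-RRP facility.
    print(f"ON-RRP: ${earn_on_rrp:,.0f}  via TNB: ${earn_via_tnb:,.0f}")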

What would happen, then, if TNB, and perhaps some other firms like it, had their way? That would be the end, first of all, of the Fed’s ON-RRP facility and, therefore, of the lower limit of the Fed’s interest rate target range that that facility is designed to maintain.

Second, the Fed would face a massive increase in the real demand for excess reserve balances that would complicate both its monetary control efforts and its plan to shrink its balance sheet.

TANSTAAFL

OK, so the Fed may not like what TNB is up to. But why should the rest of us mind it? So what if the Fed’s leaky “floor-type” operating system lacks a “subfloor” to limit the extent to which the effective fed funds rate can wander below the IOER rate? Why not have the Fed pay IOER to the money funds, and to the GSEs while it’s at it, and have a leak-free floor instead? Besides, many of us have money in money funds, so that we stand to earn a little more from those funds once they can help themselves to the Fed’s interest payments. What’s not to like about that?

Plenty, actually. Consider, first of all, what the change means. The Fed would find itself playing surrogate to a large chunk of the money market fund industry: instead of investing their clients’ funds in some portfolio of Treasury securities, money market funds would leave the investing to the Fed, for a return — the IOER rate — which, instead of depending directly upon the yield on the Fed’s own asset portfolio, is chosen by Fed bureaucrats.

Now ask yourself: Just how is it that the Fed’s IOER payments could allow MMMFs to earn more than they might by investing money directly into securities themselves? Because the Fed has less overhead? Don’t make me laugh. Because Fed bureaucrats are more astute investors? I told you not to make me laugh! No, sir: it’s because the Fed can fob off risk — like the duration risk it assumed by investing in so many longer-term securities — on third parties, meaning taxpayers, who bear it in the form of reduced Fed remittances to the Treasury. That means in turn that any gain the MMMFs would realize by having a bank that’s basically nothing but a shell operation designed to let them bank with the Fed would really amount to an implicit taxpayer subsidy. There Ain’t No Such Thing As A Free Lunch.

As it stands, of course, ordinary banks are already taking advantage of that same subsidy. But two wrongs don’t make a right. Or so my grandmother told me.

[Cross-posted from Alt-M.org]

The Reason Foundation’s Bob Poole has published a new book, Rethinking America’s Highways: A 21st Century Vision for Better Infrastructure.

The book examines the structure of U.S. highway ownership and financing and describes why major reforms are needed. Bob has a deep understanding of both the economics and engineering of highways.

Bob puts U.S. highways in international context. He describes, for example, how Europe has more experience with private highways than we do. The photo below is the Millau Viaduct in southern France. Wiki says it is “ranked as one of the great engineering achievements of all time.” The structure includes the tallest bridge tower in the world, and it was built entirely by private money. Isn’t that beautiful? I mean both the bridge and the fact that it is private enterprise.

Bob’s book concerns the institutional structure for highways, which is different from the often superficial highway discussions in D.C. Those discussions usually revolve around the total amount of money the government spends. But the more important issue is ensuring that we spend on projects where the returns outweigh the costs.

D.C. policymakers often focus on the jobs created by highway construction. But labor is a cost of projects, not a benefit. Instead, policymakers should focus on generating long-term net value.

Finally, spending advocates often decry potholes and deficient bridges, but the optimal amount of wear-and-tear on infrastructure is not zero, else we would spend an infinite amount.

So the challenge is to spend the right amount, and to focus it on the most needed repairs and expansions. To do that, we need to get the institutional structure right, and that is what Bob’s book is about.

Every policy wonk and politician interested in infrastructure should read Bob’s book.


The Independent said this of the bridge: “The viaduct, costing €400m (£278m), has been built in record time (just over three years) for a project of this size. The French construction company, Eiffage, the direct descendant of the company started by Gustave Eiffel, the builder of the celebrated tower beside the Seine, has raised the money entirely from private financing. In return, the company has been given a 75-year concession to run the viaduct as a toll-bridge.”

Shortly after Iowa prosecutors charged illegal immigrant Christian Rivera with the murder of Mollie Tibbetts in August, his Iowa employer erroneously stated that E-Verify had approved him for legal work. That turned out to be false: his employer, Yarrabee Farms, had actually run his name and Social Security Number (SSN) through a different system, the Social Security Number Verification Service (SSNVS), which merely verified that the name and number matched. That mix-up has inspired many to argue that an E-Verify mandate for all new hires would have stopped Rivera from working and, thus, prevented the murder of Mollie Tibbetts. That’s almost certainly not true. New details reveal that E-Verify would likely not have prevented Rivera from working.

E-Verify is an electronic employment eligibility verification system run by the federal government at taxpayer expense. Created as a pilot program in 1996, E-Verify is intended to prevent the hiring of illegal immigrants by checking the identity information they submit for employment against federal databases maintained by the Social Security Administration and the Department of Homeland Security. The theory behind E-Verify is that illegal immigrants won’t have the identity documents to pass E-Verify (hold your laughter), so they won’t be able to work, thus sending them all home and preventing more from coming. That naïve theory fails when confronted with the reality of the Rivera case.

Rivera submitted the name John Budd, an out-of-state driver’s license, and an SSN that matched the name to his employer, Yarrabee Farms, when he was hired in 2014. Yarrabee Farms ran the name John Budd and the SSN through the Social Security Number Verification Service (SSNVS) to confirm that they matched for tax purposes (Yarrabee Farms confused SSNVS with E-Verify). SSNVS matched the name with the SSN and approved Rivera-disguised-as-Budd to work.

E-Verify would also have matched the name with the SSN and approved Rivera for work. The fundamental design flaw in E-Verify is that it verifies only the documents that a worker hands his employer, not the worker himself. Thus, if an illegal immigrant hands the identity documents of an American citizen to an E-Verify-using employer, the system verifies the documents and the worker holding them gets the job – just as happened here when Rivera handed Yarrabee Farms the identity of John Budd. That’s why 54 percent of illegal immigrants run through E-Verify are approved for legal work. E-Verify is worse than a coin toss at identifying known illegal immigrants.
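The flaw is easy to state in code. The sketch below is purely illustrative – the record and the check are invented stand-ins, not E-Verify’s actual database or interface – but it captures the logic: the system answers “does this name-SSN pair exist in the records?”, never “does this pair belong to the person presenting it?”

```python
# Purely illustrative sketch of a document-based check; the record and
# function are invented stand-ins, not E-Verify's actual database or API.
RECORDS = {("John Budd", "123-45-6789")}  # hypothetical valid identity record

def document_check(name: str, ssn: str) -> bool:
    """Approve for work if the (name, SSN) pair matches a valid record."""
    return (name, ssn) in RECORDS

# An unauthorized worker presenting Budd's valid documents is approved,
# because nothing ties the documents to the person presenting them.
print(document_check("John Budd", "123-45-6789"))  # True
```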

Rivera’s identity would even have gotten around Iowa’s DRIVE program because he handed his employer an out-of-state driver’s license. DRIVE is intended to link identity information from Iowa’s DMV to job applicants as an extra layer of security. If any of that information doesn’t match the information the applicant gives his employer, the employer is supposed to realize the applicant is an illegal worker. The flaw in DRIVE, however, is that it works only with Iowa’s own DMV records and adds no extra security for out-of-state driver’s licenses. Thus, Rivera’s out-of-state identity would not have been caught by DRIVE.

Rivera is a low-skilled and poor illegal immigrant from Mexico whose English language skills are so bad that he needs an interpreter in court.  Yet he would easily have been able to fool E-Verify, a sophisticated government immigration enforcement program praised by members of Congress, the President, and the head of at least one DC think-tank, by using somebody else’s name and SSN with a driver’s license from another state. 

A law passed in 1986 requires workers in the United States to present government identification to work legally – a requirement that has resulted in an explosion of identity theft. Rivera likely stole Budd’s identity to get a job, an unintended consequence of that 1986 law. A national E-Verify mandate would vastly expand identity theft.

As a further wrinkle, if Yarrabee Farms had found any of Rivera’s identity documents or information suspicious and confronted him about his identity, name, race, or age, the company would likely have run afoul of other labor laws and exposed itself to a serious lawsuit. The federal government expects employers to enforce immigration laws, but not to the point that they profile applicants. The safe choice is to profile no one and to hire those who present documents, so long as the documents are not obviously fake.

The last wrinkle is that many businesses don’t comply with E-Verify in states where it is mandated. In the second quarter of 2017, only 59 percent of new hires in Arizona were run through E-Verify, even though the law mandates that 100 percent be. Arizona has the harshest state-level immigration enforcement laws in the country, and even it cannot guarantee compliance with E-Verify. There is even evidence that Arizona’s E-Verify mandate temporarily increased property crime committed by a subpopulation more likely to be illegally present in the United States, before that population learned that E-Verify is easy to fool. South Carolina, the state with the best-reputed enforcement of E-Verify, had only 55 percent compliance in the same quarter of 2017. The notion that a lackluster Washington will do better than Arizona or South Carolina is too unserious to merit rebuttal.

Since SSNVS matched the name John Budd with a valid SSN and Rivera used an out-of-state driver’s license, E-Verify would not have caught him. E-Verify is a lemon, not a silver bullet to stop illegal immigration. It wouldn’t have stopped Rivera from working in Iowa. E-Verify’s cheerleaders should stop using the tragic murder of Mollie Tibbetts as a sales pitch for their failed government program.

 

Cato released my study today on “Tax Reform and Interstate Migration.”

The 2017 federal tax law increased the tax pain of living in a high-tax state for millions of people. Will the law induce those folks to flee to lower-tax states?

To find clues, the study looks at recent IRS data and reviews academic studies on interstate migration.

For each state, the study calculates the ratio of domestic in-migration to out-migration for 2016. States losing residents to other states have ratios below 1.0; states gaining residents have ratios above 1.0. New York’s ratio is 0.65, meaning that for every 100 households that left, only 65 moved in. Florida’s ratio is 1.45, meaning that 145 households moved in for every 100 that left.
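For readers who want to reproduce the calculation from raw in- and out-migration counts, here is a minimal sketch; the counts are illustrative stand-ins chosen to match the two ratios above, not the actual IRS figures.

```python
# Minimal sketch of the migration-ratio calculation. The counts are
# illustrative stand-ins, not the actual IRS figures.
flows = {
    # state: (households moving in, households moving out)
    "New York": (65_000, 100_000),
    "Florida": (145_000, 100_000),
}

for state, (inflow, outflow) in flows.items():
    ratio = inflow / outflow  # above 1.0 means net in-migration
    print(f"{state}: {ratio:.2f}")
```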

Figure 1 maps the ratios. People are generally moving out of the Northeast and Midwest to the South and West, but they are also leaving California, on net.

People move between states for many reasons, including climate, housing costs, and job opportunities. But when you look at the detailed patterns of movement, it is clear that taxes also play a role.

I divided the country into the 25 highest-tax and 25 lowest-tax states by a measure of household taxes. In 2016, almost 600,000 people moved, on net, from the former to the latter.

People are moving into low-tax New Hampshire and out of Massachusetts. Into low-tax South Dakota and out of its neighbors. Into low-tax Tennessee and out of Kentucky. And into low-tax Florida from New York, Connecticut, New Jersey, and just about every other high-tax state.

On the West Coast, California is a high-tax state, while Oregon and Washington fall just inside the lower-tax half.

Of the 25 highest-tax states, 24 of them had net out-migration in 2016.

Of the 25 lowest-tax states, 17 had net in-migration.  

 

The full study is available at https://object.cato.org/sites/cato.org/files/pubs/pdf/tbb-84-revised.pdf

A new report from the American Public Transportation Association (APTA) comes out firmly in support of the belief that correlation proves causation. The report observes that traffic fatality rates are lower in urban areas with high rates of transit ridership, and claims that this proves “that modest increases in public transit mode share can provide disproportionally larger traffic safety benefits.”


Here is one of the charts that APTA claims proves that modest increases in transit ridership will reduce traffic fatalities. Note that, in urban areas with fewer than 25 annual transit trips per capita – the vast majority of them – the relationship between transit and traffic fatalities is virtually nil. The chart is taken from APTA’s document.

In fact, APTA’s data show no such thing. New York has the nation’s highest per capita transit ridership and a low traffic fatality rate. But there are urban areas with very low ridership rates that had even lower fatality rates in 2012, while there are other urban areas with fairly high ridership rates that also had high fatality rates. APTA claims the correlation between transit and traffic fatalities is a high 0.71 (where 1.0 is a perfect correlation), but that’s only when you include New York and a few other large urban areas: among urban areas of 2 million people or less, APTA admits the correlation is a low 0.28.
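How much a single extreme observation can do to a correlation is easy to demonstrate. In the sketch below the numbers are invented for illustration – eight “ordinary” urban areas whose ridership and fatality rates are essentially unrelated, plus one New York-like outlier – yet adding that one outlier turns a near-zero correlation into a strong one.

```python
# Invented data: eight "ordinary" urban areas whose transit ridership and
# traffic fatality rates are essentially unrelated, plus one NY-like outlier.
import statistics

trips  = [5, 8, 10, 12, 15, 18, 22, 25, 230]             # transit trips per capita
deaths = [7.0, 9.5, 6.0, 8.8, 7.4, 9.0, 6.5, 8.2, 2.5]   # fatalities per 100,000

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"without the outlier: {pearson(trips[:-1], deaths[:-1]):+.2f}")  # ~ +0.01
print(f"with the outlier:    {pearson(trips, deaths):+.2f}")            # ~ -0.83
```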

The United States has two kinds of urban areas: New York and everything else. Including New York in any analysis of urban areas will always bias any statistical correlations in ways that have no application to other urban areas.

In most urban areas outside of New York, transit ridership is so low that it has no real impact on urban travel. Among major urban areas other than New York, APTA’s data show 2012 ridership ranging from 55 trips per person per year in Los Angeles to 105 in Washington DC to 133 in San Francisco-Oakland. From the 2012 National Transit Database, transit passenger miles per capita ranged from 287 in Los Angeles to 544 in Washington to 817 in San Francisco.

Since these urban areas typically see around 14,000 passenger miles of per capita travel on highways and streets per year, the 530-mile difference in transit usage between Los Angeles and San Francisco is pretty much irrelevant. Thus, even if there is a weak correlation between transit ridership and traffic fatalities, transit isn’t the cause of that correlation.

San Francisco and Washington actually saw slightly more per capita driving than Los Angeles in 2012, yet APTA says they had significantly lower fatality rates (3.7 fatalities per 100,000 residents in San Francisco and 3.6 in Washington vs. 6.4 in Los Angeles). Clearly, some other factor must be influencing both transit ridership and traffic fatalities.

With transit ridership declining almost everywhere, this is just a desperate attempt by APTA to make transit appear more relevant than it really is. In reality, contrary to APTA’s unsupported conclusion, modest increases in transit ridership will have no measurable effect on traffic fatality rates.

Content moderation remains in the news following President Trump’s accusation that Google manipulated its searches to harm conservatives. Yesterday Congress held two hearings on content moderation, one mostly about foreign influence and the other mostly about political bias. The Justice Department also announced Attorney General Sessions will meet soon with state attorneys general “to discuss a growing concern that these companies may be hurting competition and intentionally stifling the free exchange of ideas on their platforms.” 

None of this is welcome news. The First Amendment sharply limits government power over speech; it does not limit private governance of speech. The Cato Institute is free to select speakers and topics for our “platform.” The tech companies have that right also, even if they are politically biased. Government officials should support a culture of free speech as well, and officials bullying private companies contravenes that culture. Needless to say, having the Justice Department investigate those companies looks a lot like a threat to the companies’ freedom.

So much for law and theory. Here I want to offer some Madisonian thoughts on these issues. No one can doubt James Madison’s liberalism. But he wanted limited government in fact as well as in theory, and he thought hard about practical politics as a means of realizing liberal ideals. We should too.

Let’s begin with the question of bias. The evidence for bias against conservatives is anecdotal and episodic. The tech companies deny any political bias, and their incentives raise doubts about partisan censorship. Why take the chance you might drive away millions of customers and invite the wrath of Congress and the executive branch on your business? Are the leaders of these companies really such political fanatics that they would run such risks? 

Yet these questions miss an important point. The problem of content moderation bias is not really a question of truth or falsity. It is rather a difficult political problem with roots in both passion and reason. 

Now, as in the past, politicians have powerful reasons to foster fear and anger among voters. People who are afraid and angry are more likely to vote for a party or a person who promises to remedy an injustice or protect the innocent. And fear and anger are always about someone threatening vital values. For a Republican president, a perfect “someone” might be tech companies who seem to be filled with Progressives and in control of the most important public forums in the nation. 

But the content moderation puzzle is not just about the passions. The fears of the right (and to a lesser degree, the left) are reasonable. To see this, consider the following alternative world. Imagine the staff of the Heritage Foundation has gained control over much of the online news people see and what they might say to others about politics. Imagine also that after a while Progressives start to complain that the Heritage folks are removing their content or manipulating news feeds. The leaders of Heritage deny the charges. Would you believe them?

Logically it is true that this “appearance of bias” is not the same as bias, and bias may be a vice but cannot be a crime for private managers. But politically that may not matter much, and politics may yet determine the fate of free speech in the online era. 

Companies like Google have to somehow foster legitimacy for their moderation of content, moderation that cannot be avoided if they are to maximize shareholder value. They have to convince most people that they have a right to govern their platforms even when their decisions seem wrong. 

Perhaps recognizing that some have reasonable as well as unreasonable doubts about their legitimacy would be a positive step forward. And people who harbor those reasonable doubts should keep in mind the malign incentives of politicians who benefit from fostering fear and anger against big companies. 

If the tech companies fail to gain legitimacy, we all will have a problem worse than bias. Politicians might act, theory and law notwithstanding. The First Amendment might well stop them. But we all would be better off with numerous, legitimate private governors of speech on the internet. Google’s problem is ours.

In Supreme Court nominee Brett Kavanaugh’s opening statement at his hearing Tuesday, he praised Merrick Garland, with whom he serves on the D.C. Circuit, as “our superb chief judge.”

If you were surprised by that, you shouldn’t have been. When President Obama nominated Garland to the high court, Judge Kavanaugh described his colleague as “supremely qualified by the objective characteristics of experience, temperament, writing ability, scholarly ability for the Supreme Court … He has been a role model to me in how he goes about his job.”

In fact, it has been reported in at least one place that one reason Kavanaugh was left off Trump’s initial list of SCOTUS nominees was that he had been so vocal and public in praising Garland’s nomination.

Now, it would be understandable if neither side in the partisan confirmation wars chose to emphasize this bit of background to the story. Republican strategists might not be keen on reminding listeners of what their party did with Garland’s nomination, and might also worry about eroding enthusiasm for Kavanaugh among certain elements of their base. Democratic strategists, meanwhile, might see the episode as one in which the present nominee comes off as not-a-monster, and, well, you can’t have that.

The lesson, if there is one, might be that the federal courts are not as polarized and tribal as much of the political class and punditry become at nomination time.

The Italian general election of March 4, 2018, produced an improbable coalition government between two upstart populist parties: the left-Eurosceptic-nationalist Movimento 5 Stelle (Five Star Movement) and the right-Eurosceptic-nationalist Lega (League). The coalition partners agree on greater public spending and, at the same time, on tax cuts that would reduce revenue. How then to pay for the additional spending? Italy is already highly indebted. Its public debt stands at 133 percent of GDP, the highest in the Eurozone apart from Greece and well above the EU average of 87 percent. Its sovereign bonds carry a high default risk premium: today, the yield on Italian 10-year bonds stands at 291 basis points above the yield on 10-year German bunds, up from a spread in the 130–140 range during the months before the election.

If tax revenue and debt cannot practically be increased, the remaining fiscal option—for a country with its own fiat currency—is printing base money. But Italy is part of the Eurozone, and only the ECB can create base-money euros. A group of four Italian economists (Biagio Bossone, Marco Cattaneo, Massimo Costa, and Stefano Sylos Labini), correctly noting that “budget constraints and a lack of monetary sovereignty have tied policymakers’ hands,” and regarding this as a bad thing, have proposed in a series of publications that Italy should introduce a new domestic quasi-money, a kind of parallel currency that they call “fiscal money.” Similar proposals have been made by Yanis Varoufakis, the former Greek finance minister, and by Joseph Stiglitz, the prominent American economist. Italy’s coalition government is reportedly considering these proposals seriously.

Under the Bossone et al. proposal, the Italian government would issue euro-denominated bearer “tax rebate certificates” (TRCs). The government would pledge to accept these at face value in “future payments to the state (taxes, duties, social contributions, and so forth).” The certificates in that sense would be “redeemable at a later date – say, two years after issuance.” If non-interest-bearing, they would trade at a discounted value. But if interest were paid to keep the certificates always at par, and the payment system accordingly accepted them as the equivalent of base-money euros, the certificates would be additional spendable money in the public’s hands. “As a result,” they argue, “Italy’s output gap — that is, the difference between potential and actual GDP — would close.” Thus they claim that “properly designed, such a system could substantially boost economic output and public revenues at little to no cost.”

Remarkable claims. Bossone et al. have recently argued that their “fiscal money” program would not violate ECB rules. But there is a more basic question: would it actually work to boost real GDP sustainably by shrinking unemployment and excess capacity? On critical examination, the answer is no. The proposal is based on wishful thinking.

To provide empirical context, note that estimated slack in the Italian economy is already shrinking. The OECD estimate of Italy’s output gap (the percentage by which real GDP falls short of estimated full-employment or “potential” GDP) was large—greater than 5 percent—for 2014, the year when Bossone et al. first floated their proposal. Among the major Eurozone economies, only Greece, Spain, and Portugal had larger gaps; France had a gap half as large, while Germany was above its estimated potential GDP. For 2018, however, Italy’s estimated output gap is under 0.5 percent. For 2019 the OECD projects that actual GDP will exceed full-employment GDP.

Theoretically (as famously argued by Leland Yeager and by Robert Clower and Axel Leijonhufvud), in a world of sticky prices and wages a depressed level of real output can be due to an unsatisfied excess demand for money, which logically corresponds to an aggregate excess supply (unsold inventories) of other goods, including labor. People building up their real money balances will do so by buying fewer goods at current prices and offering more labor at current wages. But is that the cause of depressed output in Italy today? Yeager’s “cash-balance interpretation of depression” assumes an economy with its own money, domestically fixed in quantity, so that an excess demand for money can be satisfied only by a drawn-out process of falling prices and wages that raises real balances.

But Italy today does not have its own money. It is a part of a much larger monetary area, the Eurozone. (For one indication of Italy’s share of the euro economy, Italian banks hold 14.7% of euro deposits.) The European Central Bank through tight monetary policy can create an excess demand for money in the entire Eurozone, in which case Italy suffers equally with other Eurozone countries, but it cannot create an excess demand for money specifically in Italy. A specifically Italian excess demand for money can arise if Italians increase their demand for money balances relative to other Eurozone residents, but in that case euros can and will flow in from the rest of the Eurozone (corresponding to Italians more eagerly selling goods or borrowing) to satisfy that demand.

Because Italy’s small output gap in 2018 therefore cannot be plausibly attributed to an unsatisfied excess demand for money, an expansion of the domestic money stock through the creation of “fiscal money” is not an appropriate remedy.

If not due to an excess demand for money, what is the cause of Italy’s lingering output gap? I don’t know, but I would look for real factors. Likely candidates are labor-market inflexibility in the face of real shocks, and the reluctance of investors to put financial or real capital into a country with serious fiscal problems (hence a serious risk of new taxes or higher tax rates soon) and a non-negligible threat of leaving the euro.

The flip side of euros flowing into Italy from the rest of the Eurozone to satisfy Italian money demand is that any excess money in Italy will flow out. If Italians already hold the quantity of euro balances they desire, then the creation of “fiscal money” would not increase Italy’s money stock except transitorily. Supposing that Italians treat new “fiscal money” as the domestic equivalent of euros, the addition to their money balances would result in holdings greater than desired at current euro prices and interest rates. In restoring their desired portfolio shares (spending off excess balances) they would send euros abroad (by assumption, the domestic quasi-money would not be accepted abroad) in purchases of imported goods and financial assets.

It isn’t clear, however, that the public would actually regard “fiscal money” as the equivalent of base-money euros added to the circulation. Unlike fiat base money, TRCs are not a net asset. They come with corresponding debts, the government’s obligation to accept them in lieu of euros for taxes (say) two years after issue. There is no reason for taxpayers to think themselves richer for having more TRCs in their wallets given that they will need to pay future taxes (equivalent in present value) to service and retire them.
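To put the point in present-value terms, under the simplifying assumptions of a single redemption date two years out and a constant discount rate, the certificates’ value to their holders is exactly offset by the present value of the tax revenue the government forgoes when they are redeemed:

```latex
% Back-of-the-envelope sketch of the "no net asset" point. F is the face
% value of TRCs issued, r a constant discount rate; redemption is at T = 2.
% All symbols are illustrative, not from the Bossone et al. proposal.
\[
\underbrace{\frac{F}{(1+r)^{2}}}_{\text{value of TRCs to holders}}
\;-\;
\underbrace{\frac{F}{(1+r)^{2}}}_{\text{PV of taxes needed to honor them}}
\;=\; 0
\]
```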

Despite $8.6 billion spent on the eradication of opium in Afghanistan over the past seventeen years, the U.S. military has failed to stem the flow of Taliban revenue from the illicit drug trade. Afghanistan produces the majority of the world’s opium, and recent U.S. military escalations have failed to alter the situation. According to a recent piece in the Wall Street Journal:

“Nine months of targeted airstrikes on opium production sites across Afghanistan have failed to put a significant dent in the illegal drug trade that provides the Taliban with hundreds of millions of dollars, according to figures provided by the U.S. military.”

This foreign war on drugs has been no more successful than its domestic counterpart. If U.S. military might cannot suppress the underground market, local police forces have no hope. Supply-side repression does not seem to work, and its costs and unintended consequences are large.

 Research assistant Erin Partin contributed to this blog post.

In 1985, Reason Foundation co-founder and then-president Robert Poole heard about a variable road pricing experiment in Hong Kong. In 1986, he learned that France and other European countries were offering private concessions to build toll roads. In 1987, he interviewed officials of Amtech, which had just invented electronic transponders that could be used for road tolling. He put these three ideas together in a pioneering 1988 paper suggesting that Los Angeles, the city with the worst congestion in America, could solve its traffic problems by adding private, variable-priced toll lanes to existing freeways.

Although Poole’s proposal has since been carried out successfully on a few freeways in southern California and elsewhere, it is nowhere near as ubiquitous as it ought to be given that thirty years have passed and congestion is worse today in dozens of urban areas than it was in Los Angeles in 1988. So Poole has written Rethinking America’s Highways, a 320-page review of his research on the subject since that time. Poole will speak about his book at a livestreamed Cato event this Friday at noon, eastern time.

Because Poole has influenced my thinking in many ways (and, to a very small degree, the reverse is true), many of the concepts in the book will be familiar to readers of Gridlock or some of my Cato policy analyses. For example, Poole describes elevated highways such as the Lee Roy Selmon Expressway in Tampa as a way private concessionaires could add capacity to existing roads. He also looks at the state of autonomous vehicles and their potential contributions to congestion reduction.

France’s Millau Viaduct, by many measures the largest bridge in the world, was built entirely with private money at no risk to French taxpayers. The stunning beauty, size, and price of the bridge are an inspiration to supporters of public-private partnerships everywhere.

Beyond these details, Poole is primarily concerned with fixing congestion and rebuilding the nation’s aging Interstate Highway System. His “New Vision for U.S. Highways,” the subject of the book’s longest chapters, is that congested roads should be tolled and that new construction and reconstruction should be done by private concessionaires, not public agencies. The book’s cover shows France’s Millau Viaduct, which a private concessionaire opened in 2004 at a cost of more than $400 million. Poole compares demand-risk and availability-payment partnerships – in the former, the private partner takes the risk and earns any profits; in the latter, the public takes the risk and the private partner is guaranteed a profit – and comes down on the side of the former.

This chart showing throughput on a freeway lane is based on the same data as a chart on page 256 of Rethinking America’s Highways. It suggests that, by keeping speeds from falling below 50 mph, variable-priced tolling can greatly increase throughput during rush hours.

The tolling chapter answers arguments against tolling, responses Poole has no doubt made so many times that he is tired of giving them. He mentions (but doesn’t emphasize enough, in my opinion) that variable pricing can keep traffic moving at 2,000 to 2,500 vehicles per hour per freeway lane, while throughput can slow to as few as 500 vehicles per hour in congestion. This is the most important and unanswerable argument for tolling, for – contrary to those who say that tolling will keep poor people off the roads – it means that tolling will allow more, not fewer, people to use roads during rush hours.
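The arithmetic behind those throughput numbers can be illustrated with the textbook Greenshields traffic-flow model, in which flow equals speed times density and speed falls linearly as density rises. The parameters below are classroom-style assumptions, not Poole’s data, and the simple model puts maximum flow at half the free-flow speed rather than the roughly 50 mph observed on real freeways; the qualitative point is the same: once density passes the critical level, throughput collapses, and pricing that holds density below that level moves more cars, not fewer.

```python
# Textbook Greenshields model: speed falls linearly with density, and
# flow = speed * density. Parameters are classroom assumptions, not Poole's data.
V_FREE = 60.0   # free-flow speed, mph
K_JAM = 150.0   # jam density, vehicles per mile per lane

def speed(k):
    return V_FREE * (1.0 - k / K_JAM)   # mph at density k

def flow(k):
    return speed(k) * k                 # vehicles per hour per lane

# Flow peaks at half the jam density; beyond that, adding cars cuts throughput.
for k in (40, 75, 140):
    print(f"density {k:3d} veh/mi -> speed {speed(k):4.1f} mph, flow {flow(k):5.0f} veh/hr")
```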

While I agree with Poole that private partners would be more efficient at building new capacity than public agencies, I don’t think this idea is as important as tolling. County toll road authorities in Texas, such as the Fort Bend Toll Road Authority, have been very efficient at building new highways that are fully financed by tolls.

Despite considerable (and uninformed) opposition to tolling, an unusual coalition of environmentalists and fiscal conservatives has persuaded the Oregon Transportation Commission to begin tolling Portland freeways. As a result, Portland may become the first city in America to toll all of its freeways during rush hour, a goal that would be thwarted if conservatives insisted on private toll concessions.

Tolling can end congestion, but Poole points out that this isn’t the only problem we face: the Interstate Highway System is at the end of its 50-year expected lifespan and some method will be needed to rebuild it. He places his faith in public-private partnerships for such reconstruction.

Tolling and public-private partnerships are two different questions, but of the two only tolling (or mileage-based user fees, which use the same technology to effectively toll all roads) is essential to eliminating congestion. It is also the best alternative to what Poole argues are increasingly obsolescent gas taxes. Anyone who talks about congestion relief without including road pricing isn’t serious about solving the problem. Poole’s book should be required reading for all politicians and policymakers who deal with transportation.
