The American Prospect


Justice Sotomayor's Powerful Defense of Equality

Yesterday, the Supreme Court upheld a provision of Michigan's constitution that bans the state or any of its subdivisions from "grant[ing] preferential treatment to any individual or group on the basis of race, sex, color, ethnicity, or national origin in the operation of public employment, public education, or public contracting." The Court was fractured; the six justices who voted to uphold the amendment did so for three independent sets of reasons. The controlling plurality opinion, written by Justice Anthony Kennedy and joined by Chief Justice John Roberts and Justice Samuel Alito, was narrow: It upheld the amendment without disturbing any precedent. Far more interesting was Justice Sonia Sotomayor's dissent, which makes a strong case for a robust interpretation of the equal-protection clause of the 14th Amendment and represents perhaps the most compelling work of her tenure on the Court so far.

The case for upholding Michigan's amendment, which was adopted through the ballot-initiative process, seems compelling at first glance. Even if one agrees that affirmative-action programs are generally constitutional, it surely cannot be the case that the Constitution requires states or the federal government to adopt affirmative-action policies. Had Michigan never adopted affirmative-action policies, or had the legislature repealed them, this would presumably not raise a serious constitutional question. So why wouldn't the citizens of Michigan be able to make the same policy choice? "There is no authority in the Constitution of the United States or in this Court’s precedents," Kennedy asserts, "for the Judiciary to set aside Michigan laws that commit this policy determination to the voters."

In the most relevant precedent, the Court ruled in 1982 that a Washington ballot initiative banning the use of busing to integrate schools violated the 14th Amendment because it "impose[d] substantial and unique burdens on racial minorities." Joined by Justice Ruth Bader Ginsburg, Justice Sotomayor makes a powerful argument that this and related precedents require the Court to strike down the Michigan initiative.

The core of the Court's "political-process" precedents, Sotomayor observes, is the guarantee that minorities have access to the state's democratic procedures. The Constitution "does not guarantee minority groups victory in the political process," but it does "guarantee them meaningful and equal access to that process. It guarantees that the majority may not win by stacking the political process against minority groups permanently, forcing the minority alone to surmount unique obstacles in pursuit of its goals—here, educational diversity that cannot reasonably be accomplished through race-neutral measures." Reallocating power in the way Michigan does here therefore raises serious equal-protection concerns.

Sotomayor's dissent also cites a landmark Kennedy opinion: Romer v. Evans, in which the Court struck down a Colorado initiative forbidding the recognition of sexual orientation as a protected category under existing civil-rights laws. Sotomayor observes that Romer "resonates with the principles undergirding the political-process doctrine." The Court forbade Colorado from denying a disadvantaged minority access to the state and local political processes, even though states are not constitutionally required to pass civil-rights laws.

Sotomayor's dissent also offers a useful defense of the political-process doctrine and its strong roots in the 14th Amendment. Starting with the famous fourth footnote of Carolene Products in 1938, the Court has held that state actions that burden minorities should be subject to particular judicial scrutiny. When burdens are placed on minorities that affect access to the political process, the possibility of discrimination is particularly acute, allowing exclusionary politics to become self-perpetuating.

It is instructive that in their concurrence Justices Antonin Scalia and Clarence Thomas mock the influence of Carolene Products: "We should not design our jurisprudence to conform to dictum in a footnote in a four-Justice opinion." This is grimly ironic, given that Scalia and Thomas recently joined an opinion gutting the Voting Rights Act based on highly implausible bare assertions offered as dicta in an opinion written by Chief Justice Roberts less than five years ago. With respect to Carolene Products, conversely, what matters is not merely the footnote in one opinion but the fact that it conforms to the 14th Amendment and was elaborated on in many subsequent cases. Several of these precedents were the political-process rulings that were supposed to control the outcome in yesterday's case. As both Scalia from the right and Sotomayor from the left argue, it's hard to deny that these precedents have been silently overruled, even if the plurality says otherwise.

The consequences of Michigan's constitutional amendment illustrate the ongoing relevance of the Court's equal-protection precedents. As the dissenters point out, after the amendment passed, the percentage of African-American students earning degrees from the University of Michigan fell to its lowest level since 1991. In addition, the percentage of racial minorities in freshman classes at Michigan's flagship university has steadily declined—even as racial minorities make up an increasing share of the state's population. This does not in itself prove that the Court was wrong to uphold the amendment, but it does show that eliminating affirmative action is unwise, and at a minimum the Supreme Court should defer to elected decision-makers who determine that it is necessary.


Today In American Exceptionalism

We're going to talk about rich people and government spending, but first, some context. At some point you may have wondered about parliamentary systems like they have in Great Britain, in which the party that gets the most seats in the legislature also installs its leader as chief executive. With complete control over government, why don't they go hog-wild and completely remake the entire country after every election? The simple answer is that they know they'll have to stand for another election before long. But the other key factor is that a transition from, say, Labour to the Conservatives isn't as jarring as a transition of total control from our Democrats to Republicans might be, because there isn't as much distance between the parties. In many of our peer countries in Europe and elsewhere, some things we fight bitterly over have basically been settled. For instance, everyone in the U.K. accepts that the National Health Service is a good thing, even if there might be some disagreements about how to keep it healthy.

We shouldn't overstate that (politics gets bitter almost everywhere, and there's always plenty to argue about), but the fact that America is a land of unusually large divides on fundamental political questions helps prepare us for this extraordinary graph, produced by Larry Bartels from a survey of 33 countries taken before the Great Recession:

From our vantage point, the idea of a society without dramatic differences in opinion as a function of class seems incredible; isn't that just how things are? Apparently not—or at least, only here is the link between class and opinions about cutting government spending such a clear one. Here's Bartels' explanation of what produces this difference:

What accounts for the remarkable enthusiasm for government budget-cutting among affluent Americans? Presumably not the sheer magnitude of redistribution in the United States, which is modest by world standards. And presumably not a traditional aversion to government in American political culture, since less affluent Americans are exposed to the same political culture as those who are more prosperous. A more likely suspect is the entanglement of class and race in America, which magnifies aversion to redistribution among many affluent white Americans. Another is the “hidden” nature of the American welfare state, which funnels subsidies to affluent people indirectly through tax breaks on mortgages and health insurance rather than providing them with public housing and free clinics.

The U.S. tax system is also quite different from most affluent countries’ in its heavy reliance on progressive income taxes. The political implications of this difference are magnified by the remarkable salience of income taxes in Americans’ thinking about taxes and government.

I'd add that in recent years, Republicans have worked hard to generate anger among those with healthy incomes not so much at poor minorities (though they're happy to exploit that prejudice wherever it can be found), but at all poor people. I don't know enough about the politics of France or Sweden or Japan to say if their conservative radio hosts read from Atlas Shrugged on the air and their elites sneer at the shabby morals of the 47 percent, but I haven't heard much to suggest that's the case.

In addition, our conversation about taxes is, as Bartels says, dominated by income taxes, and more specifically, federal income taxes, which are the most progressive part of the whole tax system. When rich people talk about that 47 percent who allegedly don't pay taxes, they're forgetting about sales taxes and property taxes, among other things, because all that's in their minds is federal income taxes. Most other advanced countries, in contrast, rely heavily on value-added taxes (VATs), which function somewhat like hefty national sales taxes, for a good portion of their revenue (in some countries VATs make up as much as a quarter of total revenue). Those are paid by everyone, which may diffuse the kind of class effect we see in Bartels' graph. Keep in mind that the American wealthy's thirst for cuts in government spending is driven by both sides of the ledger: They're contemptuous of low-class "takers," partly because they believe those moochers pay no taxes, and they're resentful about the taxes they themselves have to pay.

When we think about those European countries, we may consider their comprehensive welfare states, but most Americans probably don't realize that most of them have tax systems that are much less progressive than ours, in large part because of VATs. But those countries also return a lot of that money to people at lower incomes via their comprehensive welfare states, which produce societies with much lower levels of inequality once that's all taken into account (Dylan Matthews explained that here).

This raises an interesting question for both liberals and conservatives in America: If you could wave a wand and change our system to one like theirs—a less progressive tax system (meaning higher taxes for people of lower incomes and lower taxes for people of higher incomes) combined with a much more comprehensive welfare state (better unemployment benefits, free health care, free day care, etc.)—would you do it?

It would be one heck of a grand bargain. I suspect liberals would be more favorably inclined than conservatives, as much as conservatives dislike the progressivity of federal income taxes. Liberals would accept a tax system they saw as less fair, if what the system funded did much more to ease ordinary people's burdens and produced a more equal society. Conservatives, on the other hand, might like to complain about their high taxes, but would find the idea of a more robust welfare state absolutely horrifying.


Future of Television at Stake at Supreme Court

Today, the Supreme Court is hearing arguments in ABC v. Aereo, a case that will (cue drumroll) decide the future of television. Or maybe it won't, but it's a fascinating case, involving the intersection of technology with political and market power. There's a comprehensive explanation here, but the short version is that Aereo is a service that allows you to get broadcast TV, i.e., the major networks and a few others that send signals over the air, through an internet connection instead of a set of rabbit ears on top of your TV. The broadcast networks and the big cable companies want to shut it down, because they'd both rather have everyone getting the signals through cable. You see, your cable company pays license fees to ABC, NBC, CBS, and every other network, fees that amount to billions of dollars a year (and get passed on to you). Someone who uses Aereo to cut the cable cord isn't paying those license fees, and isn't paying for a cable subscription either.

Aereo is, without question, a potentially disruptive technology that threatens one of the networks' profit streams. The question is whether the Supreme Court should step in to protect those profits.

The case hinges on the definition of a "public performance." According to a law written during the early days of cable television, if you hold a public performance of a broadcast television program—like a bar showing a playoff game—you have to pay a licensing fee to the broadcaster. If Aereo were just pulling the broadcast signals out of the air and retransmitting them over the web to their subscribers, there would be no question that they were mounting a public performance and would owe those fees.

But what Aereo came up with is a technical means of making their transmissions private. If you subscribe to Aereo, you have your own antenna at their facilities. It's a tiny little one sitting alongside thousands of others, but it's getting the signal from the air. The company argues that it's no different from that set of rabbit ears on top of your television; your personal antenna just happens to be located not at your house but at Aereo. You also get a cloud-based DVR, allowing you to watch programs whenever you want.

The broadcasters think that's a sneaky way to get around the copyright law. And it is. But that doesn't make it illegal.

It seems to me that the legal question is pretty straightforward. The courts already said, in a separate 2008 case, that cloud-based DVRs don't violate copyright. And if Aereo gives me my own antenna, they're not mounting a "public performance" as defined in the law. Keep in mind also that the networks still make money from Aereo subscribers, because they're watching the advertising the networks sell.

And yes, this encourages people to cut the cord. But businesses have their profit models upended by new technologies all the time, and "This means we'll make less money!" isn't exactly an argument that seals your case. But with this Court, all that doesn't tell us what the outcome will be. Because what we have here is a conflict between a bunch of hugely wealthy and powerful corporations on one side, and a scrappy startup on the other. It's no surprise that the networks' case is being argued by Paul Clement, the GOP superlawyer who is such a fixture at the Supreme Court that he is sometimes referred to as the "10th justice."

That power imbalance doesn't bode well for Aereo, regardless of the merits of their case. In 2009, Jeffrey Toobin of The New Yorker wrote that the key to understanding the Roberts Court is power, because those who have it are likely to prevail. "The kind of humility that Roberts favors reflects a view that the Court should almost always defer to the existing power relationships in society. In every major case since he became the nation’s seventeenth Chief Justice, Roberts has sided with the prosecution over the defendant, the state over the condemned, the executive branch over the legislative, and the corporate defendant over the individual plaintiff. Even more than Scalia, who has embodied judicial conservatism during a generation of service on the Supreme Court, Roberts has served the interests, and reflected the values, of the contemporary Republican Party."

That's no guarantee of anything; every once in a while, the Court's usual ideological lines disappear. And there isn't a clear partisan divide on this question, even if conservatives are inclined to favor big corporations over a company that puts power in the hands of consumers. But if Aereo wins, you can bet that the cable and broadcast companies' lobbyists will get right to work on a new law putting Aereo out of business, ready to be inserted into a must-pass omnibus bill at the next opportunity.


Too Big to Fail. Not Too Strong

From Andrew Mellon’s nearly 11 years as Treasury secretary under Presidents Warren G. Harding, Calvin Coolidge, and Herbert Hoover to our time, when Timothy Geithner went from financial regulator at the New York Federal Reserve to Treasury secretary to investment executive, journalists have often employed the image of a revolving door to describe the flow of bankers between Wall Street and the U.S. Department of the Treasury. But few know that the White House and the Treasury are, arguably, a single building. A tunnel connects 1500 Pennsylvania Avenue with 1600. Presidents use this passageway to slip visitors in and out of the Oval Office.

Nomi Prins, in her new history All the Presidents’ Bankers, does not say it in so many words. But she shows that the tunnel from the White House to the Treasury extends, metaphorically, for 226 miles to Lower Manhattan. Prins digs into presidential libraries and national archives and mines a shelf of books. She also knows Wall Street from the inside. Equipped with degrees in mathematics and statistics, she worked at Chase, Lehman Brothers, Bear Stearns, and finally Goldman Sachs, where she was a managing director. Prins created something of a sensation eight years ago with one of her earlier books, Other People’s Money: The Corporate Mugging of America.

Prins’s story begins in the ’80s—the 1880s. Science, engineering, and mass production were transforming what had been an agricultural nation into an industrial one, creating private fortunes that would have been beyond imagining before the Civil War. Rivers of cash flowed to titans of the Gilded Age.

The rise of the Rockefeller fortune was crucial. The bounteous profits that John D. Rockefeller, co-founder of Standard Oil, generated through business acumen and suppressing competition grew larger than could be put to use in his own enterprise. So Rockefeller “began investing in banks, insurance companies, copper, steel, railroads and public utilities.” Rockefeller did all this, Prins shows, with the help of James Stillman, president of National City Bank (the forerunner of what is now Citigroup), while drawing on investments made through Chase National Bank, known today as JPMorgan Chase. Prins writes of marriages of interests—not European royals sealing diplomatic ties but banking royalty cementing business alliances. She recounts, for example, the 1902 marriage of James Stillman’s daughter Elsie to a Rockefeller. That merger produced James Stillman Rockefeller, who went on to become National City Bank’s president and later chairman in the 1950s and 1960s. The intertwined interests among heirs to the early generations of robber barons have proved to be as strong as a spider’s web, able to trap huge sums and sustain fortunes through good times and bad.

Prins notes that the bankers who mattered a century and more ago and the bankers who matter today have consistently numbered six, a group small enough to cooperate or collude during a crisis. On the overcast Thursday morning of October 24, 1929, the day of the crash that launched the Great Depression, Charles Mitchell of National City Bank, Al Wiggin of Chase, Seward Prosser of Bankers Trust (later bought by Deutsche Bank), and William Potter of Guaranty Trust (now part of JPMorgan) strolled to 23 Wall Street to meet with Thomas Lamont of J.P. Morgan and George Baker of First National Bank (later Citi), who had come in through a side door to avoid reporters. In 20 minutes, the six agreed to prop up the collapsing market; Lamont, standing in for J.P. Morgan Jr., who was in Europe, announced the plan.

In the 2008 banking collapse near the end of the George W. Bush administration, it was again the Big Six banks that set the terms. They met on September 12 and soon wrested from Congress more cash than the Defense Department spent that year. The banks’ champion was Bush’s Treasury secretary, Henry Paulson, who before serving in that office had run Goldman Sachs, the country’s biggest investment bank.

Even those who have read Secrets of the Temple, William Greider’s massive and brilliant 1987 exposé of the Federal Reserve, will find Prins’s book worth their time. She presents a new narrative, one that shows how the changing cast of six has shaped America’s fortunes under presidents in both parties. President Harry Truman’s four-point plan to restore the world economy, for example, created new opportunities for American banks. In 1950, Truman chose young Nelson Rockefeller, a nephew of Chase Bank chairman Winthrop Aldrich, to chair an international development advisory board just as Chase Bank’s international operations blossomed. Prins details how another man—John McCloy, chairman of Chase for seven of the Eisenhower years—tied together interests of big banks, oil, law, and diplomacy. McCloy was a top adviser to Secretary of War Henry Stimson, helping create what is now the CIA; he ran the World Bank after World War II, later served on the Warren Commission, and became perhaps the most powerful partner at the law firm hired by Rockefellers and Kennedys alike—Milbank Tweed—representing the “seven sisters” oil companies in Washington and Riyadh and their bankers. From Franklin Roosevelt to Ronald Reagan, he enjoyed remarkable White House access.

As one of the “six wise men” of American foreign policy early in the Cold War, McCloy signaled to the Eisenhower administration that it must not take too literally the powers of the Bank Holding Company Act of 1956. The law required bank-holding companies to sell nonbank investments and limited each bank to one state. In theory, Prins writes, the act was supposed to check expansion of the biggest banks by requiring Fed approval. But the Fed, by far the friendliest of the bank regulatory agencies that include the Comptroller of the Currency and the Federal Deposit Insurance Corporation, applied the law in ways that helped Wall Street banks, which grew through acquisitions, while thwarting an upstart rival then based in San Francisco: Bank of America. Clever bankers, Prins observes, made the Fed into more ally than regulator (a tradition that continued decades later, during Geithner’s tenure as president of the New York Fed from 2003 to 2009). The regulations and policies the bankers persuaded the Fed to adopt in the 1950s effectively defeated measures designed to rein in their power—especially antitrust laws that would have forced competition. In practice, it was through the very mergers the law was supposed to corral that Chase Bank grew and grew.

No matter who has been in the White House, Wall Street banks have had access, though often in ways not obvious to the average bank customer. Big bankers exploited every opportunity, including the pending impeachment of President Bill Clinton. Prins writes that as the body politic argued over a husband lying about how a blue dress got stained, “bankers welcomed the media and political frenzy as a distraction while they focused on their own houses.” Early in 1998, Sandy Weill of Travelers Insurance proposed a corporate marriage to Citibank, headed by John Reed. The creation of Citigroup just six weeks later represented a landmark violation of the Glass-Steagall Act, a New Deal law that had required retail banking, investment banking, and insurance to be performed by unconnected corporations.

In the 35 years from 1960 to 1995, fewer than 10,000 bank mergers took place. In the last half of the 1990s, when Clinton was occupied with impeachment, more than 11,000 bank mergers were consummated. Lobbyists ran up invoices, corporations and executives poured in campaign contributions, and most journalists accepted the meme that Glass-Steagall was outdated. The loophole that allowed Travelers and Citi to merge required the killing of Glass-Steagall by 1999. The inherent conflicts of interest Glass-Steagall had prevented, which few lawmakers or journalists understood, were now enabled. This in turn, Prins shows, fueled the unsound banking practices that were later to sink Bear Stearns, Lehman Brothers, and the whole economy. Meanwhile, in the early aughts Geithner proved a sightless sheriff at the New York Fed, ignoring accounting tricks, falsified warranties on mortgage securities, and weakened underwriting standards. He was told in 2005 that Fed supervision of Citigroup was inadequate but did nothing to strengthen it.

In late summer 2008, banking practices that Glass-Steagall would have barred combined with lax regulation to produce the worst financial disaster since 1929. Citigroup ended up getting a bailout of almost half a trillion dollars. The sum of money required to make good on all the bad bets and misconduct came to $12.8 trillion, Bloomberg News calculated—not much less than the output of the entire economy in 2009.

In contrast to the conviction of more than 1,000 high-level executives following the savings-and-loan scandals of the early 1990s, bankers not only avoided prosecution but turned this disaster into a boon. One major beneficiary was Jamie Dimon of JPMorgan Chase. His 2013 pay package came to $18.5 million, a 74 percent increase over 2012. He owns bank stock and options worth north of $400 million at today’s prices. Chase continues in the routine business of retail banking, taking paychecks as deposits and issuing credit cards and loans. It also underwrites stocks and bonds while selling insurance, thanks to the absence of Glass-Steagall. Chase still places huge bets in the casino game of swapping derivatives, too. In spring 2012, gambling by a Chase trader known as the “London whale” lost more than $6 billion, resulting in a $920 million fine. But what does Dimon need to worry about? These risky bets are placed with the implicit backing of taxpayers should anything go wrong, as it surely will again.

Prins notes that the six big banks agreed to $80 billion in fines following the 2008 disaster. That sounds like a lot. She points out that the amount equals eight-tenths of 1 percent of their assets, the kind of insignificant penalty that The New York Times’ Gretchen Morgenson dismisses as the equivalent of a rounding error.

Instead of busting up these banks, as the great reformer Theodore Roosevelt once called for because "malefactors of great wealth" were holding back the economy, Washington today props them up. The zero-interest-rate policy of the Fed and its purchases of tens of billions of dollars of bonds held by banks each month amount to a subtle subsidy that an International Monetary Fund paper shows comes to $83 billion per year. That is more than the cost of food stamps. The intent of zero interest is to boost the economy, and many progressives support the policy, but it also savages the savings of retirees and raises the costs of pensions.

Attorney General Eric Holder has said he is afraid to prosecute the biggest banks because doing so may upset the economy. In effect, Holder defends looking the other way at frauds documented by the Financial Crisis Inquiry Commission, whose report detailing Wall Street criminality Congress instantly tossed in the round file.

America has nearly 7,000 banks. Just two of the biggest—Citigroup and Chase—hold $4.3 trillion in assets, or more than 30 cents out of every banking dollar. Banking—from retail lending and the underwriting of securities to insurance, trading, and derivatives gambling—accounts for more than 40 percent of corporate profits, which are at record highs and rising. Historically, the figure was half that or less. The growth in money made from handling money comes at the expense of other activities. Banking, after all, is a facilitator, the oil that lubricates the engines of production. Banking is crucial, but bankers accumulate money from wealth created by others. Tom Wolfe wrote about this in The Bonfire of the Vanities: Wall Street bakes golden cakes, and bond traders, like the fictional Sherman McCoy, grow rich by selling and reselling slices of these cakes, pocketing the golden crumbs that fall off in the process.

Prins quotes Teddy Roosevelt saying in 1905, "This country has nothing to fear from the crooked man who fails. We put him in jail. It is the crooked man who succeeds who is a threat to this country." The easy profits in banking mislead us into believing finance is the source of our prosperity; when bad decisions threaten the banks, we must pay up, or else. Prins shows how six banks that should have been allowed to fail got their bailouts, and how the new regulatory law, Dodd-Frank, is weak gruel that leaves the economy vulnerable going forward. Now that the regulations that ensured sound banking are gone, presidents (and Congresses) must tap the taxpayers when the big banks stumble, or the economy really can fall into a disaster much worse than the Great Recession.

“America operates on the belief that if its biggest banks are strong, the nation will be too,” Prins concludes. But the banks are only big, not strong. Indeed, the “stress tests” to determine if the banks can withstand another financial shock are designed to test only for minor upsets, rigging the game in favor of the Big Six, which all engage in unsound practices, especially trading in derivatives. They remain big because of bad laws and enablers like Geithner and because politicians desperate for campaign donations listen to the pleas of bank owners more than those of customers. So the bankers live in grand style, lavished with subsidies that cost us more than food stamps for the poor. In return for this largesse, the bankers savage our modest savings.


How Big Data Could Undo Our Civil-Rights Laws

Big Data will eradicate extreme world poverty by 2028, according to Bono, front man for the band U2. But it also allows unscrupulous marketers and financial institutions to prey on the poor. Big Data, collected from the neonatal monitors of premature babies, can detect subtle warning signs of infection, allowing doctors to intervene earlier and save lives. But it can also help a big-box store identify a pregnant teenager—and carelessly inform her parents by sending coupons for baby items to her home. News-mining algorithms might have been able to predict the Arab Spring. But Big Data was certainly used to spy on American Muslims when the New York City Police Department collected license plate numbers of cars parked near mosques, and aimed surveillance cameras at Arab-American community and religious institutions.

Until recently, debate about the role of metadata and algorithms in American politics focused narrowly on consumer privacy protections and Edward Snowden’s revelations about the National Security Agency (NSA). That Big Data might have disproportionate impacts on the poor, women, or racial and religious minorities was rarely raised. But, as Wade Henderson, president and CEO of the Leadership Conference on Civil and Human Rights, and Rashad Robinson, executive director of ColorOfChange, a civil rights organization that seeks to empower black Americans and their allies, point out in a commentary at TPM Cafe, while Big Data can change business and government for the better, “it is also supercharging the potential for discrimination.”

In his January 17 speech on signals intelligence, President Barack Obama acknowledged as much, seeking to strike a balance between defending “legitimate” intelligence gathering on American citizens and admitting that our country has a history of spying on dissidents and activists, including, famously, Dr. Martin Luther King, Jr. If the balance Obama seeks seems precarious, it’s because the links between historical surveillance of social movements and today’s uses of Big Data are not lost on the new generation of activists.

“Surveillance, big data and privacy have a historical legacy,” says amalia deloney, policy director at the Center for Media Justice, an Oakland-based organization dedicated to strengthening the communication effectiveness of grassroots racial justice groups. “In the early 1960s, in-depth, comprehensive, orchestrated, purposeful spying was used to disrupt political movements in communities of color—the Yellow Peril, the American Indian Movement, the Brown Berets, or the Black Panthers—to create fear and chaos, and to spread bias and stereotypes.”

In the era of Big Data, the danger of reviving that legacy is real, especially as metadata collection renders legal protection of civil rights and liberties less enforceable.

Undoing Civil Rights Law

The social movements of the 1960s and 1970s organized explicitly against racial profiling, real estate redlining, and discrimination in lending, employment, and education, resulting in some of the most comprehensive civil rights legislation in American history. Title VI of the Civil Rights Act of 1964 prohibited discrimination on the grounds of race in federally assisted programs, including local and state law enforcement and federal intelligence agencies. The 1968 Fair Housing Act made it illegal to deny housing or mortgage loans based on race, color, national origin, religion, sex, familial status or handicap. The Equal Credit Opportunity Act of 1974 made it illegal for creditors to base their decisions on race, religion, national origin, marital status or participation in public assistance programs.

But current Big Data practices threaten to turn the clock back on these victories and to unravel decades of collective work to achieve democracy, equity and racial justice. For example, intelligence fusion centers, charged by the Department of Homeland Security with sharing data between public agencies and private entities to improve policing and counterterrorism measures, infringe on civil rights when they encourage reporting of “suspicious activity” in ways that lead to religious and racial profiling.

A 2009 prevention awareness bulletin from the North Central Texas Fusion Center, for instance, called on law enforcement to report individuals who profess “an aggressive, pro-Islam agenda,” the hallmarks of which included the promotion of religious tolerance, accommodation of American Muslims, and the exercise of free speech and assembly. In the same year, a Virginia fusion center released a report that designated a number of Historically Black Colleges and Universities as potential hubs for extremist and terrorist groups. The U.S. Senate Permanent Subcommittee on Investigations recently questioned the accuracy and usefulness of fusion center intelligence, but the data they collect are nevertheless widely shared among local, state and federal agencies, private sector organizations, and foreign allies.

A New Kind of Redlining

“Reverse redlining” also uses new data collection and mining techniques to revive outlawed discriminatory practices. In traditional redlining, federal agencies collected demographic and other data to draw red lines around minority neighborhoods, characterizing them as “high-risk investments,” and encouraging banks to refuse loans in these areas. Reverse redlining inverts this process. Financial institutions use metadata purchased from data brokers to split the real estate market into increasingly sophisticated micro-populations that are slapped with labels such as “Rural and Barely Making It,” “X-tra Needy,” and “Ethnic Second-City Strugglers”—categories that are clearly proxies for race and class—and then target these communities for subprime lending, payday loans, or other exploitative financial products. Financial institutions do not regard reverse redlining as discriminatory, and it is not currently illegal. In fact, it is often characterized as an inclusionary practice: a form of online marketing that provides access to financial products in “underbanked” neighborhoods.

But reverse redlining weakens the Civil Rights Act and undercuts fair lending laws. It robs whole communities of wealth and access to reasonable credit.

“This data-based discrimination strips crucial resources from working families and communities of color in ways that allow disadvantage to accumulate over time,” says Seeta Gangadharan, senior research fellow at the Open Technology Institute of the New America Foundation. “During the mortgage crisis, it is clear that African American and Latino families were targeted by the subprime industry—including by targeted online advertising. These families lost their homes and were foreclosed on at higher rates than other groups. Now these families have bad credit and that determination is sticking with them as they try to get out of debt.”

According to Terry O’Neill, president of the National Organization for Women (NOW), women, transpeople, and LGBTQ communities also face unique challenges in the new world of Big Data. While federal law prohibits basing hiring decisions on gender, marital status, pregnancy, age or disability, and many states outlaw discrimination on the basis of sexual orientation, a quick perusal of Facebook might subtly influence who makes the first cut for a job interview. While most conversations about surveillance focus on the state or corporations as the “watchers,” stalkers and abusers also use high-tech data gathering devices to track and harass their victims.

Research by the Consumer Federation of America and the Center for Responsible Lending has established that subprime loans were especially vigorously marketed to single mothers, particularly African American women and Latinas. “Women don’t have the financial cushion to be able to withstand these kinds of rip-offs,” says O’Neill. “It’s a huge hurdle in the way of economic security because we only earn 77 cents to men’s dollar. Latinas only earn 59 cents to the dollar.”

Civil Rights Principles for a New Era

Big Data and surveillance are unevenly distributed. In response, a coalition of fourteen progressive organizations, including the ACLU, ColorOfChange, the Leadership Conference on Civil and Human Rights, the NAACP, National Council of La Raza, and the NOW Foundation, recently released five “Civil Rights Principles for the Era of Big Data.” In their statement, they demand:

  • An end to high-tech profiling;

  • Fairness in automated decisions;

  • The preservation of constitutional principles;

  • Individual control of personal information; and

  • Protection of people from inaccurate data.

This historic coalition aims to start a national conversation about the role of big data in social and political inequality. “We’re beginning to ask the right questions,” says O’Neill. “It’s not just about what can we do with this data. How are communities of color impacted? How are women within those communities impacted? We need to fold these concerns into the national conversation.”

Coalition members hope that the principles will result in greater transparency in the ways information is collected and shared, and increased digital literacy investments in communities that are most directly impacted. They challenge the President’s Council of Advisors on Science and Technology to address the civil rights principles directly in the recommendations that result from its 90-day “Big Data and the Future of Privacy” study, which ends April 23. Whether or not the White House chooses to center the Civil Rights Principles in its efforts, “the document is proliferating across the country, and different groups are signing on,” says deloney of the Center for Media Justice. “From the Beltway to local communities, the language of these visionary principles really resonates.”

What is clear is that social justice in the age of big data is not just about access, privacy or consumer protection. “Many folks whose rights are violated through Big Data may not be using computers or smart phones,” says Rashad Robinson, executive director of ColorOfChange. “But decisions are being made about their opportunities based on how their data is being collected and shared. Big Data has the potential to impact our lives through every single issue that we care about: voting rights, economic rights, the courts and the criminal justice system, safety in our communities. We need to ensure that new policies are created, not just from a consumer and business perspective, but from a civil rights perspective.”

 


Manly Men Condemn Obama's Lack of Manliness

Here's a question: If Hillary Clinton becomes president, what are conservatives going to say when they want to criticize her for not invading a sufficient number of other countries? I ask because yesterday, David Brooks said on Meet the Press that Barack Obama has "a manhood problem in the Middle East." Because if he were more manly, then by now the Israelis and Palestinians would have resolved their differences, Iraq would be a thriving, peaceful democracy, and Iran would have given up its nuclear ambitions. Just like when George W. Bush was president, right?

It really is remarkable how persistent and lacking in self-awareness the conservative obsession with presidential testosterone is. Here's the exchange:

DAVID BROOKS: And, let's face it, Obama, whether deservedly or not, does have a (I'll say it crudely) but a manhood problem in the Middle East: Is he tough enough to stand up to somebody like Assad, somebody like Putin? I think a lot of the rap is unfair. But certainly in the Middle East, there's an assumption he's not tough--

CHUCK TODD: By the way, internally, they fear this. You know, it's not just Bob Corker saying it, okay, questioning whether the president is being alpha male. That's essentially what he's saying: He's not alpha dog enough. His rhetoric isn't tough enough. They agree with the policy decisions that they're making. Nobody is saying-- but it is sort of the rhetoric. Internally this is a question.

Because Brooks is a somewhat moderate conservative who writes for a paper read mostly by liberals, he naturally equivocates a little, distancing himself from the assessment even as he's making it. Chuck Todd too trots out the passive voice, to impute this assessment to nameless others. "Internally this is a question"—what does that mean, exactly? That members of the White House staff spend their days fretting about the President's manliness?

This kind of infantile conception of foreign affairs, where countries and leaders don't have interests or incentives or constraints that need to be understood in order to act wisely, but all that matters is whether you're "tough" and "strong," is distressingly common among people on the right who think of themselves as foreign policy experts.

And of course, neither Brooks nor Todd says exactly what form the manliness they wish to see in Barack Obama ought to take. Should he challenge a group of neighborhood toughs to a fight? Overhaul the transmission on the presidential limousine? Shoot an animal or two? (And by the way, a child can shoot an animal—if you want to convince me hunting is manly, I'll believe it when you kill a mountain lion with your bare hands.) 

As Todd says, "it is sort of the rhetoric," meaning that the only bit of "toughness" they can imagine is rhetorical toughness. If Obama would start droppin' his "g"s, maybe squint his eyes when he's mad like Dubya used to do, and issue the occasional threat—"If you go any farther, you're gonna be sorry, pardner"—then other countries would do exactly what we want them to. Oh wait, I know what he should do: land on an aircraft carrier, then strut around for a while in a flight suit.

Back in the real world, that isn't just idiotic, it doesn't actually work. Again, George W. Bush was about as "tough" as they come by these standards, and no sane person could argue that made his foreign policy brilliant and effective.

So the next time anyone says Obama should be "tougher" or "stronger" or "more manly," they ought to be asked exactly what actions they're recommending. And if they say it's a matter of rhetoric, then the next question should be, "Do you believe that a change in Obama's rhetoric would fundamentally alter the situation in [Ukraine, Syria, wherever]?" They'll probably respond, "Of course not, but…" And that's all you need to hear.


A Chance to Remake the Fed

Janet Yellen has only chaired the Federal Reserve for a few months, but you could forgive her if she feels like the new kid in school that nobody wants to sit with at lunchtime. With the resignation of Jeremy Stein earlier this month, there are only two confirmed members of the seven-member Board of Governors: Yellen and Daniel Tarullo. Three nominees—Stan Fischer, Lael Brainard, and Jerome Powell (whose term expired but who has been re-nominated)—await confirmation from the Senate. Another two slots are vacant, awaiting nominations. One consequence of the shortage of Fed governors is that regional Federal Reserve Bank presidents, chosen by private banks, now outnumber Board members at monetary policy meetings, allowing the private sector to effectively dictate monetary policy from the inside, and creating what some call a constitutional crisis.

The need for two more nominees, however, provides an opportunity to reunite the progressive coalition that prevented Larry Summers from getting nominated as Fed Chair last year. By pursuing a nominee dedicated to tougher oversight of the financial industry, reformers can make their mark on the central bank for years to come. But it was much easier to reject a known quantity like Summers, especially with an obvious, historic alternative in Yellen. Who will progressives demand this time to represent their interests at the Fed?

An odd tradition has sprung up organically at the Fed, in which open seats are unofficially earmarked for certain coalitions and interest groups. Elizabeth Duke’s resignation last year created an opportunity to fill the “community banker seat,” for example, and senators in both parties have asked for someone with community banking experience to replace her. There are seats traditionally designated for academic economists and international banking experts, and seats for people with Wall Street experience, all so the Board can cover its bases and have a go-to expert for each of its various responsibilities. Maybe the Fed, with such a powerful role to play in the economy, shouldn’t get assembled like the Super Friends, but that’s how it’s traditionally been done.

Since 2010, progressives could take heart in a public interest/consumer protection seat, held by Sarah Bloom Raskin, Maryland’s former chief banking regulator. But the White House basically made that seat disappear in the aftermath of the Yellen-Summers brouhaha. President Barack Obama nominated Raskin to the number two job at the Treasury Department, and replaced her with Lael Brainard, a Treasury official from the Tim Geithner era and a loyal soldier for the Administration’s viewpoint, which has tended to be more moderate on financial reform issues. Stan Fischer fulfilled the academic and international banking monetary policy roles (in addition to bringing Wall Street cred to the table, thanks to his former position as a senior executive at Citigroup). Neither nominee carries the same understanding of the relationship between financial markets and ordinary people as Raskin does. The Stein resignation offers a chance to revive the “Main Street seat,” and install somebody focused on protecting the public.

Raskin’s absence can be felt in recent regulatory decisions. Since Yellen’s term as chair began, the Fed has certainly focused more on financial regulation than it did in previous years, taking control of decision-making from the staff level and giving regulation the status of an unofficial third mandate, behind full employment and price stability. But so far, that focus has mostly amounted to rhetoric; the actions have been a mixed bag.

On the positive side, the Fed rejected Citigroup’s plan to increase its shareholder dividend after the financial giant failed a stress test. And last week, the Fed released rules for the “supplementary leverage ratio,” which limits how much the eight largest banks can borrow, a way to reduce risk and ensure that Wall Street can pay for its own losses, rather than taxpayers.

However, the leverage rules are modest; they merely limit the biggest banks to borrowing $95 for every $100 they lend out, instead of $97. Plus, the rule doesn’t take effect until 2018, giving lobbyists ample opportunity to weaken it further. On the same day as the leverage ratio announcement, the Fed delayed part of the Volcker rule, giving banks two more years to divest themselves of collateralized loan obligations, packaged securities of risky loans which have become all too prevalent of late.

Yellen herself has said that more stringent rules may be needed to limit the conditions that triggered the 2008 financial crisis, but the middling approach she has taken thus far suggests she lacks colleagues with the courage to face down Wall Street and make it happen. While Daniel Tarullo is the point person for financial regulation, he needs allies who can nudge him toward deeper reforms and shift the center of gravity on these issues.

Progressives had enough power to stop Summers, but moving from opposition to proposition forces them to coalesce around a specific individual to replace Stein, which hasn’t yet happened. There are some obvious names available. Sheila Bair, former head of the Federal Deposit Insurance Corporation (FDIC), would serve nicely in a reform role, as would Simon Johnson, the previous chief economist for the International Monetary Fund (IMF), a stalwart on the need to effectively regulate the financial system. Both of these prospects have enemies inside the White House, but they are also eminently qualified for the job and would give ordinary Americans a voice inside a powerful institution.

Other alternatives would be easier to get past the White House. Jared Bernstein, the former chief economist to Vice President Joe Biden, has been floated, though he focuses more on full employment than financial regulation. Andrew Green, legislative counsel to Sen. Jeff Merkley (D-OR), is another possibility; during the Dodd-Frank debate, he helped write the Volcker rule.

One possible name who could get widespread support is Elise Bean, the chief counsel on the Senate Permanent Subcommittee on Investigations chaired by Sen. Carl Levin (D-MI), which has churned out several aggressive reports detailing the perfidy of banks like Goldman Sachs and JPMorgan Chase. Bean has respect on both sides of the aisle and a wealth of knowledge about how the financial system works in the real world. Many reformers, who preferred to remain anonymous amid ongoing discussions on the matter, consider Bean the leading choice. 

Even the “community banker seat” could result in a strong financial regulator. Thomas Hoenig, currently vice-chair of the FDIC, is an outspoken proponent of tighter regulation, and has been instrumental in the few stronger reforms that have taken place. He has the support of the Independent Community Bankers of America, stemming from his past as a community bank examiner and the president of the Kansas City Federal Reserve, the only district composed entirely of community banks.

Hoenig, an old-line Republican, has a reputation as a hawk on monetary policy and inflation, but he would be outnumbered in that perspective on the Fed board, and some believe his strength and influence on financial regulation outweigh the monetary policy issues. He would force Tarullo to consider more radical measures, and give a community bank perspective on industry consolidation and the resulting abuses to consumers.

Appointments such as these could have a major impact well into the future, especially if the nominees pledge to serve full 14-year terms. Lately, Fed governors have served fewer years, with Jeremy Stein’s tenure one of the shortest on record. (Stein likely saw he would have less influence without his Harvard colleague Summers as Chair, and quickly departed.) But with a longer commitment, financial reformers could establish a long-standing presence at the central bank, well into the next few presidential terms.

This will require consensus among progressives, who found it easy to oppose Summers for Yellen, but must now determine who exactly to support, and how to force agreement from an Administration that got its way on the last set of nominees. The Administration knows control of the Senate is in peril, and the window of opportunity to make its mark on the Federal Reserve is closing fast. Progressives can leverage this, and restore the Main Street seat.

This would establish the principle that the Fed has a responsibility not only to use monetary policy to keep the economy moving, but to use regulatory policy to keep a lid on the capital markets. Ironically, it was Stein who tried to merge these two, suggesting that the Fed raise interest rates to depress asset bubbles. That choice to deliberately decelerate the economy would harm ordinary people, and reveals the Fed’s blind spots toward Main Street. There’s an opportunity to reverse that; progressives simply need to flex their muscles again.


White Supreme Court Case Winner Challenges Black High School Student to Debate on Affirmative Action

Brooke Kimbrough always dreamed of becoming a University of Michigan Wolverine. Her score on the ACT--a college readiness evaluation test--dwarfs the scores of most of her classmates. Earlier this month, she was part of a winning team at the National Urban League Debate Championship in Washington, D.C. Last week, she became a powerful symbol for exactly how Michigan's race-blind college admissions policies have failed.

In December, the University of Michigan informed Kimbrough that her application for admission had been wait-listed. Two months later, she received a letter informing her that she had not been accepted. But instead of conceding defeat, Kimbrough decided to fight. Today she hopes that her story will highlight how Michigan's current approach to race in admissions fails exceptional students of color. Black students made up just 4.6 percent of the 2012 freshman class; in 2008, the number was 6.8 percent.

Over the course of this year I had the honor of working with University Preparatory Academy debate coach Sharon Hopkins, who guided Kimbrough and her partner, Rayvon Dean, to victory. Shortly after her team won the debate championship, I spoke with Kimbrough about her protest of U-M’s admission policy.

"This isn't about me," Kimbrough told me. "That's not why I'm doing this. The real problem is when students are denied and don't speak up, don't question the system that failed them." To that end, Kimbrough has joined with the Coalition to Defend Affirmative Action, Integration, and Immigrant Rights and Fight for Equality By Any Means Necessary (BAMN) to advocate for the rights of black students in admissions and on campus.

Nearly twenty years ago, Jennifer Gratz, a white high school senior, was denied admission to the University of Michigan. Rather than keeping quiet, she also fought. Gratz began by mounting a coordinated legal and media battle to challenge her rejection. In 2003, the Supreme Court ruled in her favor, and its decision in Gratz v. Bollinger ended the university's points-based system of racial preferences in undergraduate admissions. Encouraged by this victory, Gratz and other opponents of affirmative action went on to champion a statewide ballot initiative that completely banned any use of race as a criterion for admissions at Michigan's public universities.

At U-M, the years following the high court’s decision have seen a precipitous drop in the number of African-American students. For Kimbrough, who uses discussions of racial privilege and cultural politics in her debate competitions, her rejection from Michigan became an opportunity to highlight a concrete instance of colorblind discrimination.

While both Gratz and Kimbrough fought their decisions, Jennifer Gratz bristles at comparisons. "I fought for all applicants to be treated equally--as individuals, without regard to race," Gratz said in a comment on the Detroit News website. “This woman is standing up for group rights and asking for preferential treatment based on race while others are discriminated against, she wants unequal treatment. Ms. Kimbrough is fighting because she wasn't accepted; I fought because of discrimination in the admissions process, a major difference."

What critics of affirmative action like Gratz don't talk about, and what they are deeply invested in keeping hidden, is the racial violence and culture of white aggression that intensifies and pervades campus life for students of color when affirmative action policies are taken away. This dirty secret was blown apart earlier this year when University of Michigan's Black Student Union decided that they'd had enough, launching the viral Twitter campaign known as #BBUM (Being Black at U-M). Stories ranged from hurtful micro-aggressions to racial slurs to threats of physical violence. Overall #BBUM highlighted the dysphoria of a campus population of color whose number is in steep decline.

But so incensed is Gratz, 37, that she has challenged Kimbrough, 17, to a public debate on the issue of affirmative action, according to the Detroit Free Press. In an apparent attempt to appear gracious, Gratz, citing her potential opponent's youth and inexperience, offered to allow Kimbrough to include a BAMN representative on her side of the debate.

The attitude expressed by Gratz betrays a seemingly willful obliviousness to the fact that no group experiences more affirmative action than white people. Michigan's formal pro-white affirmative action policy, colloquially known as “legacy preference,” puts the children of alumni ahead of other applicants. It unquestionably favors the white and the wealthy, at the expense of the poor and the black. Outside of the U.S., legacy admissions mostly went the way of feudalism. But at many U.S. universities, and especially at Michigan, legacy admissions amount to an eternal parade of white pride.

Why does legacy preference work this way? Because it reinforces the demographic power of previous generations of whites that benefited from dozens of explicitly segregationist federal and state institutions. Those institutions, from the New Deal to the G.I. Bill, helped whites out of Depression-era poverty while explicitly disadvantaging blacks, locking whole communities into cycles of violence and misery. "When I think about the fact that my grandmother's grandmother was a slave baby--like, literally owned as property--and then I hear people talk about how whites don't experience affirmative action from legacy, it's so frustrating," said Kimbrough. "People want to put that behind them. They'd prefer not to think about it. The thing is that black people can't put that history behind us because we live it every day."

And legacy doesn't even scratch the surface of the biggest instrument of racial discrimination in so-called "race-blind" university admissions: standardized testing. Most scholars of education policy agree that the ACT testing process, like the SAT, favors wealthy white students from suburban environments at the expense of students who are poor, black, and urban. This favoritism is often deemed a "necessary evil" of education policy, done in the service of meritocratic apples-to-apples comparisons of students' analytical skills. There are many reasons for the performance disparities, from the cultural assumptions of the test writers to unequal access to prep materials and tutors.

"We don't have time to prep for the ACT the way some students do," Kimbrough explains. "I come from a single-parent household. We worry about keeping the lights on and food on the table. Even though I want to go to college, people have to understand that the ACT isn't a priority. Michigan talks about ‘holistic’ admissions. I wonder what's so holistic about it." Standardized testing is literally the example given in sociological texts to define the term "institutional racism".

It must be nice to live in the world of Jennifer Gratz. It is a world in which America somehow happened without colonialism or slavery, where we are born into bodies in which race is invisible (which is how the concept of race generally functions for members of the white majority). In Gratz's worldview, disparities in wealth and access to public goods have no bearing on the measure of that mystical quotient of "ability".

"Public universities are supposed to represent us," says Kimbrough. "Blacks and Latinos are 14 percent of the population, and yet our public universities can't represent us. We pay taxes for that university to stand as tall as it does. It's sad."


Republicans on the ACA: Wrong, but Rational

"I find it strange," said Barack Obama on Thursday as he announced that the total of Americans getting private insurance through the exchanges has now exceeded 8 million, "that the Republican position on this law is still stuck in the same place that it has always been. They still can't bring themselves to admit that the Affordable Care Act is working."

But it really isn't so strange. The Republicans' continued refusal to grant that anything good could possibly come from a law they've fought so bitterly for five years, even as encouraging news continues to roll in, is quite understandable. What's more, it's perfectly rational, even when all the predictions they made about its inevitable self-destruction fail to come true.

Therein lies one of the paradoxes of our politics: At times, the most rational politician is the one who appears to be acting like a fool. 

Let's say that you're a Republican running for Senate. Perhaps you're whichever congressman will out-crazy his primary opponents to get the GOP nomination in Georgia, or you're Senate minority leader Mitch McConnell, in a fight to hold his Kentucky seat. What would you get from acknowledging that the Affordable Care Act (ACA) isn't turning out too badly after all? That would mean that everything you've been saying for years—every apocalyptic prediction, every moral condemnation, every fist-shaking denunciation of creeping socialist tyranny—has been wrong. It would also mean saying something nice about Barack Obama, who is still deeply unpopular in your state, and more important, among the voters you need in November. In fact, you can win this election without the vote of a single citizen who feels warmly toward Obama, so why say anything favorable about him or his policies at all?

And it isn't as though those citizens are particularly aware of what's going right about the ACA. They heard about the disastrous debut of healthcare.gov, that's for sure. But since then? The news has flown right past most of them. Every bad thing they can see, from the premium increase they get every year to their next sprained ankle, will get blamed on the president, while Obama won't get credit for anything that turns out right. As it happens, McConnell's Kentucky has one of the most efficient and successful state health-care exchanges, called Kynect. But the people who run it would sooner curse their own mothers from the floor of Rupp Arena than utter the word "Obamacare" (quite rationally). There may be no single statement that sums up the whole evolution of this law better than that of the man quoted in a Huffington Post article from last August, who looked over information about Kynect at a booth at the state fair and, visibly impressed, said, "This beats Obamacare I hope."

This gentleman's confusion—and there are millions more like him—is just fine with the senior senator who represents him. If the ACA isn't going to fail utterly, the next best thing is for no one to know it's succeeding. There might be a cost to denying that success if most voters understood it, but as long as they remain unaware, the rational thing for McConnell to do is keep saying everything is going horribly. Eventually, certain facts about the ACA will penetrate even to Republican voters in the Bluegrass State. At that point, it would be a mistake to deny those facts, because then you'd look like you were insanely determined to keep fighting a war you'd already lost. But for the moment, the rational Republican candidate will keep saying that Obamacare is a disaster that must be repealed, and we'll get that taken care of as long as you vote me back to Washington.

The rational politician is one who knows how to maneuver around—and exploit—his constituents' irrationality. At times, though, that irrationality can enslave the politician to positions that are bad for his party, even as they make perfect sense to him at a particular moment. For instance, the Republican Party needs to pass immigration reform to show Hispanic voters it isn't hostile to them, particularly in the face of a growing Hispanic population. But if you're a Republican congressman in a conservative, majority-white district, what's good for the national party would be deeply irrational for you. So you condemn illegal aliens and pledge to fight against comprehensive reform, then get safely reelected, along with a couple hundred of your colleagues who do the same thing. And the reform that your party needs never comes to pass. In other words, politicians can only be as rational as their constituents allow.

In the 1990s, in the face of decades of research showing American citizens to be alarmingly uninformed about the policy matters their representatives were called upon to decide, some political scientists attempted to redeem the public by looking at the picture from different angles. Samuel Popkin wrote a book in 1991 called The Reasoning Voter, in which he contended that making decisions on simple heuristics (or cognitive shortcuts) makes much more sense than spending a lot of time poring over position papers. "Low-information rationality," Popkin argued, is still rationality. He opened the book with a story about Gerald Ford getting handed a tamale and biting into it without removing the corn husk, supposedly telling Hispanic voters more than enough about his understanding of their concerns. In 1992, Benjamin Page and Robert Shapiro wrote a book called The Rational Public, which looked over decades of poll results to argue that however ignorant or confused individual voters may be, in the aggregate, the public has stable beliefs and makes reasonable decisions.

These scholars weren't wrong to look at the glass of public ignorance as half full. There are ways in which the public as a whole is rational—at the right times, and considered in the right way. But it's also undeniable that in the short term, voters are often uninformed, capricious, and easily misled. Smart politicians understand that and adjust accordingly. So the next time you see a Republican candidate saying that Obamacare is well on its way to implosion and will destroy America along the way, remember that regardless of the facts, he isn't acting foolishly. He may be denying reality, he may be appealing to his constituents' worst instincts, and he may be making them dumber along the way. But he's doing the rational thing.

When Shareholder Capitalism Came to Town

It was only 20 years ago that the world was in thrall to American-style capitalism. Not only had it vanquished communism, but it was widening its lead over Japan Inc. and European-style socialism. America’s companies were widely viewed as the most innovative and productive, its capital markets the most efficient, its labor markets the most flexible and meritocratic, its product markets the most open and competitive, its tax and regulatory regimes the most accommodating to economic growth.

Today, that sense of confidence and economic hegemony seems a distant memory. We have watched the bursting of two financial bubbles, struggled through two long recessions, and suffered a lost decade in terms of incomes of average American households.

We continue to rack up large trade deficits even as many of the country’s biggest corporations shift more of their activity and investment overseas. Economic growth has slowed, and the top 10 percent of households have captured whatever productivity gains there have been. Economic mobility has declined to the point that, by international comparison, it is only middling. A series of accounting and financial scandals, coupled with ever-escalating pay for chief executives and hedge-fund managers, has generated widespread cynicism about business. Other countries are beginning to turn to China, Germany, Sweden, and even Israel as models for their economies.

No wonder, then, that large numbers of Americans have begun to question the superiority of our brand of free-market capitalism. This disillusionment is reflected in the rise of the Tea Party and the Occupy Wall Street movements and the increasing polarization of our national politics. It is also reflected on the shelves of bookstores and on the screens of movie theaters.

Embedded in these critiques is not simply a collective disappointment in the inability of American capitalism to deliver on its economic promise of wealth and employment opportunity. Running through them is also a nagging question about the larger purpose of the market economy and how it serves society.

In the current, cramped model of American capitalism, with its focus on maximizing output growth and shareholder value, there is ample recognition of the importance of financial capital, human capital, and physical capital but no consideration of social capital. Social capital is the trust we have in one another, and the sense of mutual responsibility for one another, that gives us the comfort to take risks, make long-term investments, and accept the inevitable dislocations caused by the economic gales of creative destruction. Social capital provides the necessary grease for the increasingly complex machinery of capitalism and for the increasingly contentious machinery of democracy. Without it, democratic capitalism cannot survive.

It is our social capital that is now badly depleted. This erosion manifests in the weakened norms of behavior that once restrained the most selfish impulses of economic actors and provided an ethical basis for modern capitalism. A capitalism in which Wall Street bankers and traders think peddling dangerous loans or worthless securities to unsuspecting customers is just “part of the game,” a capitalism in which top executives believe it is economically necessary that they earn 350 times what their front-line workers do, a capitalism that thinks of employees as expendable inputs, a capitalism in which corporations perceive it as both their fiduciary duty to evade taxes and their constitutional right to use unlimited amounts of corporate funds to purchase control of the political system—that is a capitalism whose trust deficit is every bit as corrosive as budget and trade deficits.

As economist Luigi Zingales of the University of Chicago concludes in his recent book, A Capitalism for the People, American capitalism has become a victim of its own success. In the years after the demise of communism, “the intellectual hegemony of capitalism, however, led to complacency and extremism: complacency through the degeneration of the system, extremism in the application of its ideological premises,” he writes. “‘Greed is good’ became the norm rather than the frowned-upon exception. Capitalism lost its moral higher ground.”

Pope Francis recently gave voice to this nagging sense that our free-market system had lost its moral bearings. “Some people continue to defend trickle-down theories, which assume that economic growth, encouraged by a free market, will inevitably succeed in bringing about greater justice and inclusiveness in the world,” wrote the new pope in an 84-page apostolic exhortation. “This opinion, which has never been confirmed by the facts, expresses a crude and naïve trust in the goodness of those wielding economic power and in the sacralized workings of the prevailing economic system.”

Our challenge now is to restore both the economic and moral legitimacy of American capitalism. And there is no better place to start than with a reconsideration of the purpose of the corporation.

 

“MAXIMIZING SHAREHOLDER VALUE”

In the recent history of bad ideas, few have had a more pernicious effect than the one that corporations should be managed to maximize “shareholder value.”

Indeed, much of what we perceive to be wrong with the American economy these days—the slowing growth and rising inequality, the recurring scandals and wild swings from boom to bust, the inadequate investment in research and development and worker training—has its roots in this misguided ideology.

It is an ideology, moreover, that has no basis in history, in law, or in logic. What began in the 1970s and 1980s as a useful corrective to self-satisfied managerial mediocrity has become a corrupting self-interested dogma peddled by finance professors, Wall Street money managers, and overcompensated corporate executives.

Let’s start with the history. The earliest corporations, in fact, were generally chartered not for private but for public purposes, such as building canals or transit systems. Well into the 1960s, corporations were broadly viewed as owing something in return to the community that provided them with special legal protections and the economic ecosystem in which they could grow and thrive.

Legally, no statutes require that companies be run to maximize profits or share prices. In most states, corporations can be formed for any lawful purpose. Lynn Stout, a Cornell law professor, has been looking for years for a corporate charter that even mentions maximizing profits or share price. So far, she hasn’t found one. Companies that put shareholders at the top of their hierarchy do so by choice, Stout writes, not by law.

Nor does the law require, as many believe, that executives and directors owe a special fiduciary duty to the shareholders who own the corporation. The director’s fiduciary duty, in fact, is owed simply to the corporation, which is owned by no one, just as you and I are owned by no one—we are all “persons” in the eyes of the law. Corporations own themselves.

What shareholders possess is a contractual claim to the “residual value” of the corporation once all its other obligations have been satisfied—and even then the directors are given wide latitude to make whatever use of that residual value they choose, just as long as they’re not stealing it for themselves.

It is true, of course, that only shareholders have the power to elect the corporate directors. But given that directors are almost always nominated by the management and current board and run unopposed, it requires the peculiar imagination of corporate counsel to leap from the shareholders’ power to “elect” directors to a sweeping mandate that directors and the executives must put the interests of shareholders above all others.

Given this lack of legal or historical support, it is curious how “maximizing shareholder value” has evolved into such a widely accepted norm of corporate behavior.

Milton Friedman, the University of Chicago free-market economist, is often credited with first articulating the idea in a 1970 New York Times Magazine essay in which he argued that “there is one and only one social responsibility of business—to use its resources and engage in activities designed to increase its profits.” Anything else, he argued, is “unadulterated socialism.”

A decade later, Friedman’s was still a minority view among corporate leaders. In 1981, as Ralph Gomory and Richard Sylla recount in a recent article in Daedalus, the Business Roundtable, representing the nation’s largest firms, issued a statement recognizing a broader purpose of the corporation: “Corporations have a responsibility, first of all, to make available to the public quality goods and services at fair prices, thereby earning a profit that attracts investment to continue and enhance the enterprise, provide jobs and build the economy.” The statement went on to talk about a “symbiotic relationship” between business and society not unlike that voiced nearly 30 years earlier by General Motors chief executive Charlie Wilson, when he reportedly told a Senate committee that “what is good for the country is good for General Motors, and vice versa.”

By 1997, however, the Business Roundtable was striking a tone that sounded a whole lot more like Professor Friedman than CEO Wilson. “The principal objective of a business enterprise is to generate economic returns to its owners,” it declared in its statement on corporate responsibility. “If the CEO and the directors are not focused on shareholder value, it may be less likely the corporation will realize that value.”

The most likely explanation for this transformation involves three broad structural changes that were going on in the U.S. economy—globalization, deregulation, and rapid technological change. Over a number of decades, these three forces have conspired to rob what were once the dominant American corporations of the competitive advantages they had during the “golden era” of the 1950s and 1960s in both U.S. and global markets. Those advantages—and the operating profits they generated—were so great that these companies could spread the benefits around to all corporate stakeholders. The postwar prosperity was so widely shared that it rarely occurred to stockholders, consumers, or communities to wonder if they were being shortchanged.

It was only when competition from foreign suppliers or recently deregulated upstarts began to squeeze out those profits—often with the help of new technologies—that these once-mighty corporations were forced to make difficult choices. In the early going, their executives found that it was easier to disappoint shareholders than customers, workers, or even their communities. The result, during the 1970s, was a lost decade for investors.

Beginning in the mid-1980s, however, a number of companies with lagging stock prices found themselves targets for hostile takeovers launched by rival companies or corporate raiders employing newfangled “junk bonds” to finance unsolicited bids. Disappointed shareholders were only too willing to sell out to the raiders. So it developed that the mere threat of a hostile takeover was sufficient to force executives and directors across the corporate landscape to embrace a focus on profits and share prices. Almost overnight they tossed aside their more complacent and paternalistic management style, and with it a host of inhibitions against laying off workers, cutting wages and benefits, closing plants, spinning off divisions, taking on debt, and moving production overseas. Some even joined in waging hostile takeovers themselves.

Spurred on by this new “market for corporate control,” companies traded in their old managerial capitalism for a new shareholder capitalism, which continues to dominate the business sector to this day. Those high-yield bonds, once labeled as “junk” and peddled by upstart and ethically challenged investment banks, are now a large and profitable part of the business of every Wall Street firm. The unsavory raiders have now morphed into respected private-equity and hedge-fund managers, some of whom proudly call themselves “activist investors.” And corporate executives who once arrogantly ignored the demands of Wall Street now profess they have no choice but to dance to its tune.

 

THE INSTITUTIONS SUPPORTING SHAREHOLDER VALUE

An elaborate institutional infrastructure has developed to reinforce shareholder capitalism and its generally accepted corporate mandate to maximize short-term profits and share price. This infrastructure includes free-market-oriented think tanks and university faculties that continue to spin out elaborate theories about the efficiency of financial markets.

An earlier generation of economists had looked at the stock-market boom and bust that led to the Great Depression and concluded that share prices often reflected irrational herd behavior on the part of investors. But in the 1960s, a different theory began to take hold at intellectual strongholds such as the University of Chicago and quickly spread to other economics departments and business schools. The essence of the “efficient market” hypothesis, first articulated by Eugene Fama (a 2013 Nobel laureate), is that the current stock price reflects all the public and private information known about a company and therefore is a reliable gauge of the company’s true economic value. For a generation of finance professors, it was only a short logical leap from this hypothesis to a broader conclusion that the share price is therefore the best metric around which to organize a company’s strategy and measure its success.

With the rise of behavioral economics, and the onset of two stock-market bubbles, the efficient-market hypothesis has more recently come under serious criticism. Another of last year’s Nobel winners, Robert Shiller, demonstrated the various ways in which financial markets are predictably irrational. Curiously, however, the efficient-market hypothesis is still widely accepted by business schools—and, in particular, their finance departments—which continue to preach the shareholder-first ideology.

Surveys by the Aspen Institute’s Center for Business Education, for example, find that most MBA students believe that maximizing value for shareholders is the most important responsibility of a company and that this conviction strengthens as they proceed toward their degree, in many schools taking courses that teach techniques for manipulating short-term earnings and share prices. The assumption is so entrenched that even business-school deans who have publicly rejected the ideology acknowledge privately that they’ve given up trying to convince their faculties to take a more balanced approach.

Equally important in sustaining the shareholder focus are corporate lawyers, in-house as well as outside counsels, who now reflexively advise companies against actions that would predictably lower a company’s stock price.

For many years, much of the jurisprudence coming out of the Delaware courts—where most big corporations have their legal home—was based around the “business judgment” rule, which held that corporate directors have wide discretion in determining a firm’s goals and strategies, even if their decisions reduce profits or share prices. But in 1986, the Delaware Court of Chancery ruled that directors of the cosmetics company Revlon had to put the interests of shareholders first and accept the highest price offered for the company. As Lynn Stout has written, and the Delaware courts subsequently confirmed, the decision was a narrowly drawn exception to the business-judgment rule that only applies once a company has decided to put itself up for sale. But it has been widely—and mistakenly—used ever since as a legal rationale for the primacy of shareholder interests and the legitimacy of share-price maximization.

Reinforcing this mistaken belief are the shareholder lawsuits now routinely filed against public companies by class-action lawyers any time the stock price takes a sudden dive. Most of these are frivolous and, particularly since passage of reform legislation in 1995, many are dismissed. But even those that are dismissed generate cost and hassle, while the few that go to trial risk exposing the company to significant embarrassment, damages, and legal fees.

The bigger damage from these lawsuits comes from the subtle way they affect corporate behavior. Corporate lawyers, like many of their clients, crave certainty when it comes to legal matters. So they’ve developed what might be described as a “safe harbor” mentality—an undue reliance on well-established bright lines in advising clients to shy away from actions that might cause the stock price to fall and open the company up to a shareholder lawsuit. Such actions include making costly long-term investments, or admitting mistakes, or failing to follow the same ruthless strategies as their competitors. One effect of this safe-harbor mentality is to reinforce the focus on short-term share price.

The most extensive infrastructure supporting the shareholder-value ideology is to be found on Wall Street, which remains thoroughly fixated on quarterly earnings and short-term trading. Companies that refuse to give quarterly-earnings guidance are systematically shunned by some money managers, while those that miss their earnings targets by even small amounts see their stock prices hammered.

Recent investigations into insider trading have revealed the elaborate strategies and tactics used by some hedge funds to get advance information about a quarterly earnings report in order to turn enormous profits by trading on it. And corporate executives continue to spend enormous amounts of time and attention on industry analysts whose forecasts and ratings have tremendous impact on share prices.

In a now-infamous press interview in the summer of 2007, former Citigroup chairman Charles Prince provided a window into the hold that Wall Street has over corporate behavior. At the time, Citi’s share price had lagged behind that of the other big banks, and there was speculation in the financial press that Prince would be fired if he didn’t quickly find a way to catch up. In the interview with the Financial Times, Prince seemed to confirm that speculation. When asked why he was continuing to make loans for high-priced corporate takeovers despite evidence that the takeover boom was losing steam, he basically said he had no choice—as long as other banks were making big profits from such loans, Wall Street would force him, or anyone else in his job, to make them as well. “As long as the music is playing,” Prince explained, “you’ve got to get up and dance.”

It isn’t simply the stick of losing their jobs, however, that causes corporate executives to focus on maximizing shareholder value. There are also plenty of carrots to be found in those generous—some would say gluttonous—pay packages, the value of which is closely tied to the short-term performance of company stock.

The idea of loading up executives with stock options also dates to the transition to shareholder capitalism. The academic critique of managerial capitalism was that the lagging performance of big corporations was a manifestation of what economists call a “principal-agent” problem. In this case, the “principals” were the shareholders and their directors, and the misbehaving “agents” were the executives who were spending too much of their time, and the shareholders’ money, worrying about employees, customers, and the community at large.

In what came to be one of the most widely cited academic papers of all time, business-school professors Michael Jensen of Harvard and William Meckling of the University of Rochester wrote in 1976 that the best way to align the interests of managers with those of the shareholders was to tie a substantial amount of the managers’ compensation to the share price. In a subsequent paper in 1989 written with Kevin Murphy, Jensen went even further, arguing that the reason corporate executives acted more like “bureaucrats than value-maximizing entrepreneurs” was that they didn’t get to keep enough of the extra value they created.

With that academic foundation, and the enthusiastic support of executive-compensation specialists, stock-based compensation took off. Given the tens and, in more than a few cases, the hundreds of millions of dollars lavished on individual executives, the focus on boosting share price is hardly surprising. The ultimate irony, of course, is that the result of this lavish campaign to more closely align incentives and interests is that the “agents” have done considerably better than the “principals.”

Roger Martin, the former dean of the Rotman School of Management at the University of Toronto, calculates that from 1933 until 1976—roughly speaking, the era of “managerial capitalism” in which managers sought to balance the interests of shareholders with those of employees, customers, and society at large—the total real compound annual return on the stocks of the S&P 500 was 7.5 percent. From 1976 until 2011—roughly the period of “shareholder capitalism”—the comparable return was 6.5 percent. Meanwhile, according to Martin’s calculation, the ratio of chief-executive compensation to corporate profits increased eightfold between 1980 and 2000, almost all of it coming in the form of stock-based compensation.

 

HOW SHAREHOLDER PRIMACY HAS RESHAPED CORPORATE BEHAVIOR

All of this reinforcing infrastructure—the academic underpinning, the business-school indoctrination, the threat of shareholder lawsuits, the Wall Street quarterly earnings machine, the executive compensation—has now succeeded in hardwiring the shareholder-value ideology into the economy and business culture. It has also set in motion a dynamic in which corporate and investor time horizons have become shorter and shorter. The average holding period for corporate stocks, which for decades was six years, is now down to less than six months. The average tenure of a Fortune 500 chief executive is now down to less than six years. Given those realities, it should be no surprise that the willingness of corporate executives to sacrifice short-term profits to make long-term investments is rapidly disappearing.

A recent study by McKinsey & Company, the blue-chip consulting firm, and Canada’s public pension board found alarming levels of short-termism in the corporate executive suite. According to the study, nearly 80 percent of top executives and directors reported feeling the most pressure to demonstrate a strong financial performance over a period of two years or less, with only 7 percent feeling considerable pressure to deliver strong performance over a period of five years or more. It also found that 55 percent of chief financial officers would forgo an attractive investment project today if it would cause the company to even marginally miss its quarterly-earnings target.

The shift on Wall Street from long-term investing to short-term trading presents a dilemma for those directing a company solely for shareholders: Which group of shareholders is it whose interests the corporation is supposed to optimize? Should it be the hedge funds that are buying and selling millions of shares in a matter of seconds to earn hedge fund–like returns? Or the “activist investors” who have just bought a third of the shares? Or should it be the retired teacher in Dubuque who has held the stock for decades as part of her retirement savings and wants a decent return with minimal downside risk?

One way to deal with this quandary would be for corporations to give shareholders a bigger voice in corporate decision-making. But it turns out that even as they proclaim their dedication to shareholder value, executives and directors have been doing everything possible to minimize shareholder involvement and influence in corporate governance. This curious hypocrisy is most recently revealed in the all-out effort by the business lobby to limit shareholder “say on pay” or the right to nominate a competing slate of directors.

For too many corporations, “maximizing shareholder value” has also provided justification for bamboozling customers, squeezing employees, avoiding taxes, and leaving communities in the lurch. For any one profit-maximizing company, such ruthless behavior may be perfectly rational. But when competition forces all companies to behave in this fashion, neither they nor we wind up better off.

Take the simple example of outsourcing production to lower-cost countries overseas. Certainly it makes sense for any one company to aggressively pursue such a strategy. But if every company does it, these companies may eventually find that so many American consumers have suffered job loss and wage cuts that they can no longer buy the goods they are producing, even at the cheaper prices. The companies may also find that government no longer has sufficient revenue to educate its remaining American workers or dredge the ports through which its imported goods are delivered to market.

Economists have a name for such unintended spillover effects—negative externalities—and normally the most effective response is some form of government action, such as regulation, taxes, or income transfers. But one of the hallmarks of the current political environment is that every tax, every regulation, and every new safety-net program is bitterly opposed by the corporate lobby as an assault on profits and job creation. Not only must the corporation commit to putting shareholders first—as they see it, the society must as well. And with the Supreme Court’s decision in Citizens United, corporations are now free to spend unlimited sums of money on political campaigns to elect politicians sympathetic to this view.

Perhaps the most ridiculous aspect of shareholder-über-alles is how at odds it is with every modern theory about managing people. David Langstaff, then–chief executive of TASC, a Virginia-based government-contracting firm, put it this way in a recent speech at a conference hosted by the Aspen Institute and the business school at Northwestern University: “If you are the sole proprietor of a business, do you think that you can motivate your employees for maximum performance by encouraging them simply to make more money for you?” Langstaff asked rhetorically. “That is effectively what an enterprise is saying when it states that its purpose is to maximize profit for its investors.”

Indeed, a number of economists have been trying to figure out the cause of the recent slowdown in both the pace of innovation and the growth in worker productivity. There are lots of possible culprits, but surely one candidate is that American workers have come to understand that whatever financial benefit may result from their ingenuity or increased efficiency is almost certain to be captured by shareholders and top executives.

The new focus on shareholders also hasn’t been a big winner with the public. Gallup polls show that people’s trust in and respect for big corporations have been on a long, slow decline in recent decades—at the moment, only Congress and health-maintenance organizations rank lower. When was the last time you saw a corporate chief executive lionized on the cover of a newsweekly? Odds are it was the late Steve Jobs of Apple, who wound up creating more wealth for more shareholders than anyone on the planet by putting shareholders near the bottom of his priority list.

 

RISING DOUBTS ABOUT SHAREHOLDER PRIMACY

The usual defense you hear of “maximizing shareholder value” from corporate chief executives is that at many firms—not theirs!—it has been poorly understood and badly executed. These executives make clear they don’t confuse today’s stock price or this quarter’s earnings with shareholder value, which they understand to be profitability and stock appreciation over the long term. They are also quick to acknowledge that no enterprise can maximize long-term value for its shareholders without attracting great employees, providing great products and services to customers, and helping to support efficient governments and healthy communities.

Even Michael Jensen has felt the need to reformulate his thinking. In a 2001 paper, he wrote, “A firm cannot maximize value if it ignores the interest of its stakeholders.” He offered a proposal he called “enlightened stakeholder theory,” one that “accepts maximization of the long run value of the firm as the criterion for making the requisite tradeoffs among its stakeholders.”

But if optimizing shareholder value implicitly requires firms to take good care of customers, employees, and communities, then by the same logic you could argue that optimizing customer satisfaction would require firms to take good care of employees, communities, and shareholders. More broadly, optimizing any function inevitably requires the same tradeoffs or messy balancing of interests that executives of an earlier era claimed to have done.

The late, great management guru Peter Drucker long argued that if one stakeholder group should be first among equals, surely it should be the customer. “The purpose of business is to create and keep a customer,” he famously wrote.

Roger Martin picked up on Drucker’s theme in “Fixing the Game,” his book-length critique of shareholder value. Martin cites the experience of companies such as Apple, Johnson & Johnson, and Procter & Gamble, which put customers first and whose long-term shareholders have consistently done better than those of companies that claim to put shareholders first. The reason, Martin says, is that customer focus minimizes undue risk taking, maximizes reinvestment, and creates, over the long run, a larger pie.

Having spoken with more than a few top executives over the years, I can tell you that many would be thrilled if they could focus on customers rather than shareholders. In private, they chafe under the quarterly earnings regime forced on them by asset managers and the financial press. They fear and loathe “activist” investors. They are disheartened by their low public esteem. Few, however, have dared to challenge the shareholder-first ideology in public.

But recently, some cracks have appeared.

In 2006, Ian Davis, then–managing director of McKinsey, gave a lecture at the University of Pennsylvania’s Wharton School in which he declared, “Maximization of shareholder value is in danger of becoming irrelevant.”

Davis’s point was that global corporations have to operate not just in the United States but in the rest of the world where people either don’t understand the concept of putting shareholders first or explicitly reject it—and companies that trumpet it will almost surely draw the attention of hostile regulators and politicians.

“Big businesses have to be forthright in saying what their role is in society,” Davis said, “and they will never do it by saying, ‘We maximize shareholder value.’”

A few years later, Jack Welch, the former chief executive of General Electric, made headlines when he told the Financial Times, “On the face of it, shareholder value is the dumbest idea in the world.” What he meant, he scrambled to explain a few days later, is that shareholder value is an outcome, not a strategy. But coming from the corporate executive (“Neutron Jack”) who had embodied ruthlessness in the pursuit of competitive dominance, his comment was viewed as a recognition that the single-minded pursuit of shareholder value had gone too far. “That’s not a strategy that helps you know what to do when you come to work every day,” Welch told Bloomberg Businessweek. “It doesn’t energize or motivate anyone. So basically my point is, increasing the value of your company in both the short and long term is an outcome of the implementation of successful strategies.”

Tom Rollins, the founder of the Teaching Company, offers as an alternative what he calls the “CEO” strategy, standing for customers, employees, and owners. Rollins starts by noting that at the foundation of all microeconomics are voluntary trades or exchanges that create a “surplus” for both buyer and seller that in most cases exceeds their minimum expectations. The same logic, he argues, ought to apply to the transactions between a company and its employees, customers, and owners/shareholders.

The problem with a shareholder-first strategy, Rollins argues, is that it ignores this basic tenet of economics. It views any surplus earned by employees and customers as both unnecessary and costly. After all, if the market would allow the firm to hire employees for 10 percent less, or charge customers 10 percent more, then a firm that declines to drive the hardest possible bargain with employees and customers is failing to maximize shareholder profit.

But behavioral research into the importance of “reciprocity” in social relationships strongly suggests that if employees and customers believe they are not getting any surplus from a transaction, they are unlikely to want to continue to engage in additional transactions with the firm. Other studies show that having highly satisfied customers and highly engaged employees leads directly to higher profits. As Rollins sees it, if firms provide above-market returns—surplus—to customers and employees, then customers and employees are likely to reciprocate and provide surplus value to firms and their owners.

Harvard Business School professor Michael Porter and Kennedy School senior fellow Mark Kramer have also rejected the false choice between a company’s social and value-maximizing responsibilities that is implicit in the shareholder-value model. “The solution lies in the principle of shared value, which involves creating economic value in a way that also creates value for society by addressing its needs and challenges,” they wrote in the Harvard Business Review in 2011. In the past, economists have theorized that for profit-maximizing companies to provide societal benefits, they had to sacrifice economic success by adding to their costs or forgoing revenue. What they overlooked, Porter and Kramer wrote, was that by ignoring social goals—safe workplaces, clean environments, effective school systems, adequate infrastructure—companies wound up adding to their overall costs while failing to exploit profitable business opportunities. “Businesses must reconnect company success with social progress,” Porter and Kramer wrote. “Shared value is not social responsibility, philanthropy or even sustainability, but a new way to achieve economic success. It is not on the margin of what companies do, but at the center.”

 

SMALL STEPS TOWARD A MORE BALANCED CAPITALISM

If it were simply the law that was responsible for the undue focus on shareholder value, it would be relatively easy to alter it. Changing a behavioral norm, however—particularly one so accepted and reinforced by so much supporting infrastructure—is a tougher challenge. The process will, of necessity, be gradual, requiring carrots as well as sticks. The goal should not be to impose a different focus for corporate decision-making as inflexible as maximizing shareholder value has become but rather to make it acceptable for executives and directors to experiment with and adopt a variety of goals and purposes.

Companies would surely be responsive if investors and money managers would make clear that they have a longer time horizon or are looking for more than purely bottom-line results. There has long been a small universe of “socially responsible” investing made up of mutual funds, public and union pension funds, and research organizations that monitor corporate behavior and publish scorecards based on an assessment of how companies treat customers, workers, the environment, and their communities. While some socially responsible funds and asset managers have consistently achieved returns comparable, or even slightly superior, to those of competitors focused strictly on financial returns, there is no evidence of any systematic advantage. Nor has there been a large hedge fund or private-equity fund that made it to the top with a socially responsible investment strategy. You can do well by doing good, but it’s no sure thing that you’ll do better.

Nineteen states—the latest is Delaware, where a million businesses are legally registered—have recently established a new kind of corporate charter, the “benefit corporation,” that explicitly commits companies to be managed for the benefit of all stakeholders. About 550 companies, including Patagonia and Seventh Generation, now have B charters, while 960 have been certified as meeting the standards set out by the nonprofit B Lab. Although almost all of today’s B corps are privately held, supporters of the concept hope that a number of sizable firms will become B corps and that their stocks will then be traded on a separate exchange.

One big challenge facing B corps and the socially responsible investment community is that the criteria they use to assess corporate behavior exhibit an unmistakable liberal bias, which makes it easy for many investors, money managers, and executives to dismiss them as ideological and naïve. Even a company run for the benefit of multiple stakeholders will at various points be forced to make tough choices, such as reducing payroll, trimming costs, closing facilities, switching suppliers, or doing business in places where corruption is rampant or environmental regulations are weak. As chief executives are quick to remind us, companies that ignore short-term profitability run the risk of never making it to the long term.

Among the growing chorus of critics of “shareholder value,” a consensus is emerging around a number of relatively modest changes in tax and corporate governance laws that, at a minimum, could help lengthen time horizons of corporate decision-making. A group of business leaders assembled by the Aspen Institute to address the problem of “short-termism” recommended a recalibration of the capital-gains tax to provide investors with lower tax rates for longer-term investments. A small transaction tax, such as the one proposed by the European Union, could also be used to dampen the volume and importance of short-term trading.

The financial-services industry and some academics have argued that such measures, by reducing market liquidity, will inevitably increase the cost of capital and result in markets that are more volatile, not less. A lower tax rate for long-term investing has also been shown to have a “lock-in” effect that discourages investors from moving capital to companies offering the prospect of the highest return. But such conclusions are implicitly based on the questionable assumption that markets without such tax incentives are otherwise rational and operate with perfect efficiency. They also raise fundamental questions about the role played by financial markets in the broader economy. Once you assume, as they do, that the sole purpose of financial markets is to channel capital into investments that earn the highest financial return to private investors, then maximizing shareholder value becomes the only logical corporate strategy.

There is also a lively debate on the question of whether companies should offer earnings guidance to investors and analysts—estimates of what earnings per share will be for the coming quarter. The argument against such guidance is that it reinforces the undue focus of both executives and investors on short-term earnings results, discouraging long-term investment and incentivizing earnings manipulation. The counterargument is that even in the absence of company guidance, investors and executives inevitably play the same game by fixating on the “consensus” earnings estimates of Wall Street analysts. Given that reality, they argue, isn’t it better that those analyst estimates are informed as much as possible by information provided by the companies themselves?

In weighing these conflicting arguments, the Aspen group concluded that investors and analysts would be better served if companies provided information on a wider range of metrics with which to assess and predict business performance over a longer time horizon than the next quarter. While it might take Wall Street and its analysts some time to adjust to this richer and more nuanced form of communication, it would give the markets a better understanding of what drives each business while taking some of the focus off the quarterly numbers game.

In addressing the question of which shareholders should have the most say over company strategies and objectives, there have been suggestions for giving long-term investors greater power in selecting directors, approving mergers and asset sales, and setting executive compensation. The idea has been championed by McKinsey & Company managing director Dominic Barton and John Bogle, the former chief executive of the Vanguard Group, and is under active consideration by European securities regulators. Such enhanced voting rights, however, would have to be carefully structured so that they encourage a sense of stewardship on the part of long-term investors without giving company insiders or a few large shareholders the opportunity to run roughshod over other shareholders.

The short-term focus of corporate executives and directors is heavily reinforced by the demands of asset managers at mutual funds, pension funds, hedge funds, and endowments, who are evaluated and compensated on the basis of the returns they have generated over the past quarter and the past year. Even as most big companies have now taken steps to stretch the incentive pay plans of top corporate executives out over several years to encourage a longer-term perspective, the outsize quarterly and annual bonuses on Wall Street keep the economy’s time horizons fixated on the short term. At a minimum, federal regulators could require asset managers to disclose how their compensation is determined. They might also require funds to justify, on the basis of actual performance, the use of short-term metrics when managing long-term money such as pensions and college endowments.

The Securities and Exchange Commission also could nudge companies to put greater emphasis on long-term strategy and performance in their communications with shareholders. For starters, companies could be required to state explicitly in their annual reports whether their priority is to maximize shareholder value or to balance shareholder interests with other interests in some fashion—certainly shareholders deserve to know that in advance. The commission might require companies to annually disclose the size of their workforce in each country and information on the pay and working conditions of the company’s employees and those of its major contractors. Disclosure of any major shifts in where work is done could also be required, along with the rationale. There could be a requirement for companies to perform and disclose regular environmental audits and to acknowledge other potential threats to their reputation and brand equity. In proxy statements, public companies could be required to explain the ways in which executive compensation is aligned with long-term strategy and performance.

My hunch, however, is that employees, not outside investors and regulators, will finally free the corporate sector from the straitjacket of shareholder value. Today, young people—particularly those with high-demand skills—are drawn to work that doesn’t simply pay well but also has meaning and social value. You can already see that reflected in what students are choosing to study and where highly sought graduates are choosing to work. As the economy improves and the baby-boom generation retires, companies that have reputations as ruthless maximizers of short-term profits will find themselves on the losing end of the global competition for talent.

In an era of plentiful capital, it will be skills, knowledge, and creativity that will be in short supply, with those who have them calling the corporate tune. Who knows? In the future, there might even be conferences at which hedge-fund managers and chief executives get together to gripe about the tyranny of “maximizing employee satisfaction” and vow to do something about it.
