Wednesday, April 20, 2016

Build beautifully for Android Wear’s Round Screen using API 23’s -round identifier

Posted by Hoi Lam, Android Wear Developer Advocate



Android Wear is about choice. From the beginning, users could choose the style they wanted, including watches with circular screens. With Android Wear API 23, we have enabled even better developer support so that you can code delightful experiences backed by beautiful code. The key component of this is the new round resource identifier, which helps you separate resource files, such as layouts and dimens, between round and square devices. In this blog post, I will lay out the options developers have and explain why you should consider dimens.xml! In addition, I will outline how best to deal with devices that have a chin.



Getting started? Consider BoxInsetLayout!


If all your content can fit into a single square screen, use the BoxInsetLayout. This class has been included in the Wearable Support Library from the start. It boxes your content into the middle square area of a circular screen and is ignored on square screens. For details on how to use the BoxInsetLayout, refer to the Use a Shape-Aware Layout section in our developer guide.
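As a minimal sketch (the view IDs and text here are hypothetical, and the Wearable Support Library is assumed to be on the classpath), a BoxInsetLayout boxing its child into the inscribed square looks like this:

```xml
<android.support.wearable.view.BoxInsetLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- layout_box="all" confines the child to the largest square
         inscribed in a round screen; square screens ignore it -->
    <FrameLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_box="all">

        <TextView
            android:id="@+id/text"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="Hello Round World!" />
    </FrameLayout>
</android.support.wearable.view.BoxInsetLayout>
```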












Without BoxInsetLayout
With BoxInsetLayout




Goodbye WatchViewStub, Hello layout-round!


Developers have been able to specify different layouts for square and round watches using WatchViewStub from the beginning. With Android Wear API 23, this has become even easier: developers can simply put different layouts into the layout-round and layout folders. Previously, with WatchViewStub, developers needed to wait until the layout was inflated before attaching view elements, which added significant complexity to the code. The -round identifier eliminates this:




 Pre Android Wear API 23 - WatchViewStub (4 files)





1. layout/activity_main.xml

 <?xml version="1.0" encoding="utf-8"?>
 <android.support.wearable.view.WatchViewStub
     xmlns:android="http://schemas.android.com/apk/res/android"
     xmlns:app="http://schemas.android.com/apk/res-auto"
     xmlns:tools="http://schemas.android.com/tools"
     android:id="@+id/watch_view_stub"
     android:layout_width="match_parent"
     android:layout_height="match_parent"
     app:rectLayout="@layout/rect_activity_main"
     app:roundLayout="@layout/round_activity_main"
     tools:context="com.android.example.watchviewstub.MainActivity"
     tools:deviceIds="wear"></android.support.wearable.view.WatchViewStub>

2. layout/rect_activity_main.xml - layout for square watches


3. layout/round_activity_main.xml - layout for round watches


4. MainActivity.java
  
 
 protected void onCreate(Bundle savedInstanceState) {
     super.onCreate(savedInstanceState);
     setContentView(R.layout.activity_main);
     final WatchViewStub stub = (WatchViewStub) findViewById(R.id.watch_view_stub);
     stub.setOnLayoutInflatedListener(new WatchViewStub.OnLayoutInflatedListener() {
         @Override
         public void onLayoutInflated(WatchViewStub stub) {
             mTextView = (TextView) stub.findViewById(R.id.text);
         }
     });
 }


 After Android Wear API 23 - layout-round (3 files)




1. layout-notround/activity_main.xml - layout for square watches


2. layout-round/activity_main.xml - layout for round watches


3. MainActivity.java

 protected void onCreate(Bundle savedInstanceState) {
     super.onCreate(savedInstanceState);
     setContentView(R.layout.activity_main);
     mTextView = (TextView) findViewById(R.id.text);
 }



That said, since WatchViewStub is part of the Android Wear Support Library, code that currently uses it will keep working; this is not a breaking change, and you can refactor at your convenience. In addition to the -round identifier, developers can also use the -notround identifier to separate resources. So why would you want to use it in place of the default layout? It’s a matter of style. If you have a mixture of layouts, you might consider organising them this way:



  • layout/ - Layouts that work for both circular and square watches

  • layout-round/ and layout-notround/ - Screen shape specific layouts



An even better way to develop for round - values-round/dimens.xml


Maintaining multiple layout files is potentially painful: each time you add a screen element, you need to add it to every layout file. On mobile devices, you will usually only do this to specify different layouts for phones and tablets, and rarely for different phone resolutions. For watches, unless your screen layout is significantly different between round and square (which is rare, based on the applications I have seen thus far), you should consider using different dimens.xml files instead.



As I experimented with the -round identifier, I found that the easiest way to build for round and square watches is actually to specify values/dimens.xml and values-round/dimens.xml. By specifying different padding settings, I am able to create the following layout with the same layout.xml file and two dimens files, one for square and one for round. The values used suit this layout; you should experiment with different values to see what works best:















values-round/dimens.xml

 <dimen name="header_start_padding">36dp</dimen>
 <dimen name="header_end_padding">22dp</dimen>
 <dimen name="list_start_padding">36dp</dimen>
 <dimen name="list_end_padding">22dp</dimen>

values/dimens.xml

 <dimen name="header_start_padding">16dp</dimen>
 <dimen name="header_end_padding">16dp</dimen>
 <dimen name="list_start_padding">10dp</dimen>
 <dimen name="list_end_padding">10dp</dimen>




Before API 23, to do the same would have involved a significant amount of boilerplate code manually specifying the different dimensions for all elements on the screen. With the -round identifier, this is now easy to do in API 23 and is my favourite way to build round / square layouts.
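To illustrate how a single layout can pick up the shape-specific values, here is a sketch of a shared layout.xml referencing those dimens (the view types and IDs are hypothetical; on a round watch the paddings resolve from values-round/dimens.xml, otherwise from values/dimens.xml):

```xml
<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- Same file for both shapes; only the dimens differ -->
    <TextView
        android:id="@+id/header"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:paddingStart="@dimen/header_start_padding"
        android:paddingEnd="@dimen/header_end_padding" />

    <ListView
        android:id="@+id/list"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:paddingStart="@dimen/list_start_padding"
        android:paddingEnd="@dimen/list_end_padding" />
</LinearLayout>
```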



Don’t forget the chin!


Some watches have an inset (also known as a “chin”) in an otherwise circular screen. So how can you build a beautiful layout while keeping your code elegant? Consider this design:







activity_main.xml


 <FrameLayout
     ...>
     <android.support.wearable.view.CircledImageView
         android:id="@+id/androidbtn"
         android:src="@drawable/ic_android"
         .../>
     <ImageButton
         android:id="@+id/lovebtn"
         android:src="@drawable/ic_favourite"
         android:paddingTop="5dp"
         android:paddingBottom="5dp"
         android:layout_gravity="bottom"
         .../>
 </FrameLayout>




If we do nothing, the heart-shaped button will disappear into the chin. Luckily, there’s an easy way to fix this with fitsSystemWindows:



 <ImageButton
     android:id="@+id/lovebtn"
     android:src="@drawable/ic_favourite"
     android:paddingTop="5dp"
     android:paddingBottom="5dp"
     android:fitsSystemWindows="true"
     .../>


If you are eagle-eyed (see the middle image below, under "fitsSystemWindows="true""), you might have noticed that the top and bottom padding we put in is lost. This is one of the main side effects of using fitsSystemWindows: it works by overriding the view's padding to make it fit the system window. So how do we fix this? We can replace the padding with an InsetDrawable:



inset_favourite.xml



 <inset
     xmlns:android="http://schemas.android.com/apk/res/android"
     android:drawable="@drawable/ic_favourite"
     android:insetTop="5dp"
     android:insetBottom="5dp" />


activity_main.xml



 <ImageButton
     android:id="@+id/lovebtn"
     android:src="@drawable/inset_favourite"
     android:paddingTop="5dp"
     android:paddingBottom="5dp"
     android:fitsSystemWindows="true"
     .../>



Although the padding settings in the layout will simply be ignored, the code is tidier if we remove them.
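With the inset drawable carrying the spacing, the button declaration can drop the padding attributes entirely. A sketch of the tidied version (same elided attributes as above):

```xml
<ImageButton
    android:id="@+id/lovebtn"
    android:src="@drawable/inset_favourite"
    android:fitsSystemWindows="true"
    .../>
```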














Do nothing
fitsSystemWindows="true"
fitsSystemWindows="true"
and use InsetDrawable



If you require more control than is possible declaratively in XML, you can also adjust your layout programmatically. To obtain the size of the chin, attach a View.OnApplyWindowInsetsListener to the outermost view of your layout. Don’t forget to call v.onApplyWindowInsets(insets); otherwise the new listener will consume the inset, and inner elements that react to insets may not get the chance to react.



How to obtain the screen chin size programmatically


MainActivity.java



 private int mChinSize;

 protected void onCreate(Bundle savedInstanceState) {
     super.onCreate(savedInstanceState);
     setContentView(R.layout.activity_main);
     // find the outermost element
     final View container = findViewById(R.id.outer_container);
     // attach a View.OnApplyWindowInsetsListener
     container.setOnApplyWindowInsetsListener(new View.OnApplyWindowInsetsListener() {
         @Override
         public WindowInsets onApplyWindowInsets(View v, WindowInsets insets) {
             mChinSize = insets.getSystemWindowInsetBottom();
             // The following line is important for inner elements which react to insets
             v.onApplyWindowInsets(insets);
             return insets;
         }
     });
 }


Last but not least, remember to test your code! Since last year, we have included several device images for Android Wear devices with a chin to make testing easier and faster:





Square peg in a round hole no more!


Android Wear has always been about empowering users to wear what they want. A major part in enabling this is the round screen. With API 23 and the -round resource identifier, it is easier than ever to build for both round and square watches - delightful experiences backed by beautiful code!



Additional Resources


Why would I want to fitsSystemWindows? by Ian Lake - Best practice for using this powerful tool including its limitations.
ScreenInfo Utility by Wayne Piekarski - Get useful information for your display including DPI, chin size, etc.

Tuesday, April 19, 2016

Chari and Kehoe on Bailouts

V. V. Chari and Pat Kehoe have a very nice article on bank reform, "A Proposal to Eliminate the Distortions Caused by Bailouts," backed up by a serious academic paper.

Their bottom line proposal is a limit on debt to equity ratios, rising with size. This is, I think, a close cousin to my view that a Pigouvian tax on debt could substitute for much of our regulation.

Banks pose a classic moral hazard problem. In a financial crisis, governments are tempted to bail out bank creditors. Knowing this, bankers take too much risk, and people lend to banks that are too risky. The riskier the bank, the stronger the government's temptation to bail it out ex post.

Chari and Pat write with a beautifully disciplined economic perspective: don't argue about transfers, as rhetorically and politically effective as that might be, but identify the distortion and the resulting inefficiency. Who cares about bailouts? Taxpayers, obviously. But economists shouldn't worry primarily about this as a transfer. First, the economic problem is the distortion that higher tax rates impose on the economy. Second, there is a subsidy distortion: bailed-out firms and creditors expand at the expense of other, more profitable activities. Third, there is a debt and size distortion: since debt is bailed out but not equity, we get more debt, and the banks that can get bailouts become inefficiently large.
For the sake of argument, I think, Chari and Pat take a benign view of orderly resolution and living wills. Their point is that even this is not enough: though functioning resolution would solve the tax distortion and subsidy distortion, the debt-size externality remains.
The extent of regulator intervention depends on the aggregate losses due to threatened bankruptcies. Individual firms do not internalize the effect of their decisions on aggregate outcomes and, therefore, on the extent of such intervention. Just as with bailouts, individual firms have incentives to become too large relative to the sustainably efficient outcome 
Their alternative: A regulatory system that
limits the debt-equity ratio of financial firms and imposes a Pigouvian tax on the size of these firms.
The paper is not specific beyond this suggestion. It's intriguing for many reasons outside the paper.

First, they limit the ratio of debt to equity, not the ratio of debt to assets. Current bank regulation is centered on the ratio of debt to assets, but then we get into the mess of measuring risk-weighted assets, many of them at book value. Abandoning this whole mess is a great idea.

Thinking about some of the same issues, I came to the conclusion that a simple Pigouvian tax on debt would work better than current debt-to-asset regulations. If you borrow $1 (especially short) you pay a 5-cent tax per year.

There is an interesting question then whether this tax on debt or a regulatory debt-to-equity ratio limit will work better.

Chari and Pat don't say what the optimal debt/equity ratio should be, or how it should be enforced dynamically. If a bank is up against the limit, do they want it to sell assets ("fire sales" and "liquidity spirals," banks will complain), to issue equity ("agency costs," banks will complain), or what? Chari and Pat also don't say whether they want regulators to target the ratio of debt to the book value of equity or to the market value of equity. I like market value, which further avoids accounting shenanigans. I suspect the regulatory community will choose book value, insulating themselves from responding to market signals.

I like announcing a price rather than a quantity -- a Pigouvian tax on debt rather than a debt-equity ratio -- as it avoids the whole argument about being just this side vs. just that side of any cliff. My tax could rise with size, to address their size externality as well.

But they don't analyze the idea of a tax on debt rather than their ratio, so perhaps both would work as well within their model. Their ratio of debt to equity is sufficient for their ends, but perhaps not necessary.

Chari and Pat take a benign view of debt, and the functioning of resolution authority: They
start from the perspective that because debt contracts are widespread, they must be privately valuable and, in all likelihood, also valuable to society in general.
They also posit that "orderly resolution" authority will in fact swiftly impose losses on creditors, and that by using "living wills" the offending banks can be quickly broken up.

I think they make these assumptions to focus on one issue. That's good for an academic paper. But in contemplating a larger regulatory scheme, I think we should question both assumptions.

In a modern economy, liquidity need not require fixed value, and I think we could get by with a lot less debt.  That leads me to much more capital overall. They implicitly head this way,  presuming that debt is vital, but then advocating debt equity ratio regulations that will presumably mean a lot more equity.

I suspect that resolution authorities, hearing screaming on the phone from large financial institution creditors of a troubled bank,  and with "systemic" and "contagion" in mind, will swiftly bail out creditors once again.  I think that a bank too complex to go through bankruptcy, even a reformed bankruptcy code, is hopeless for the poor Treasury secretary to carve up in a weekend. So another reason for more equity is to avoid this system that will not work, as well as to patch up its remaining limitations even if it works perfectly.

Chari and Pat also step outside the model, stating that the resolution authority
is worrisome because by giving extraordinary powers to regulators, it allows them to rewrite private contracts between borrowers and creditors...[this]... can do great harm to the well-being of their citizens. Societies prosper when citizens are confident that contracts they enter will be enforced
Their closing sentence is important
We emphasize that regulation is needed in our framework not because markets on their own lead to inefficient outcomes, but because well-meaning governments that lack commitment introduce distortions and externalities that need to be corrected.

Saturday, April 16, 2016

A better living will


"US rejects 'living wills' of 5 banks," from the FT and WSJ, puts this event in the larger story of Dodd-Frank unraveling. Juicy quotes:
WSJ: “living wills,” ... are supposed to show in detail how these banking titans, in the event of failure, could be placed into bankruptcy without wrecking the financial system.

FT: ...the shortcomings varied by bank but included flawed computer models; inadequate estimates of liquidity needs; questionable assumptions about the capital required to be wound up; and unacceptable judgments on when to enter bankruptcy.

FT: David Hirschmann of the US Chamber of Commerce, the biggest business lobby, said the living wills process was “broken”. “When you can’t comply no matter how much money you put into legitimately trying to comply, maybe it’s time to ask: did we get the test wrong?” he said.

WSJ: Six years after the law was passed, and eight years since the financial crisis, regulators given broad authority to remake American finance, with thousands of regulatory officials on their payroll, cannot figure out a system to allow financial giants to fail, even in theory. What are we paying these people for?
It seems like a good moment to revisit an idea buried deep in "Toward a run-free financial system."  How could we structure banks to fail transparently?


Picture of bank structure

Recall, here is how banks are structured now (extremely simplified). Banks hold assets like loans, mortgages and securities. Banks get money to fund these assets by selling a tiny amount of equity, i.e. stock, and by a huge amount of borrowing, including deposits, long-term bonds, and short-term debt.

The trouble with this system is, if the value of the assets falls by more than $10 in my example, the equity is wiped out, and the bank can't pay its debts. If short-term debt holders worry about this event, they all clamor to get paid first, so a run can happen. That's not really a problem either; bankruptcy is set up exactly to handle this situation. The creditors who lent money to the bank split up the assets. Yes, they don't get their full money back, but if you lend to a bank that's leveraged like this, that's the risk you take.

The trouble is the widespread feeling that big banks are too big, too complex, too illiquid, too utterly muddy, to carve up this way. If it takes years in court, and if all the value of the assets is drained away by lawyers, you have a real problem. Furthermore, we often want the profitable parts of the bank to remain in operation while the creditors squabble over assets. (Ben Bernanke's classic paper on banking in the Great Depression makes this point beautifully.) The ATMs should not go dark, and the offices where people know their customers and can keep things going should stay in operation.

Hence, big banks become too big -- or too something -- to fail. In that situation, the government is mighty tempted to bail out the creditors and keep the thing limping along. Given that temptation, a lot of large, politically well connected creditors also scream that there will be ``systemic dangers'' if they don't get their cash now, adding to the bailout pressure. A "living will" is supposed to stop this chain, by allowing  bank assets to very quickly get divvied up among creditors.

But the large banks are, apparently, so large and complex that nobody can figure out a living will. That's debatable; for example, Kenneth Scott and John Taylor argue bankruptcy can work. But let's go with the idea. Is there an alternative to Bernie Sanders' bust-up-the-banks? Here's one.

picture of altered bank structure that is easy to resolve


Starting from the left, suppose the bank holds all the same assets it does today. But, it issues 100% equity to finance its assets. Now, a 100% equity financed bank cannot fail. If you don't have any debt, you can't fail to pay debts. Yes, the bank can lose money and slowly go out of business. But it cannot go bankrupt. As it loses money, the value of its equity declines, until shareholders get mad and liquidate the carcass. Nobody can run to get their money out ahead of the other person. End of bankruptcy, end of bank runs, end of financial crises.

(Technical note. Yes, that's a bit overstated. A bank can potentially invest in derivatives and other securities where it can lose more than all of the investment. The amount of monitoring needed to make sure this doesn't happen is trivial next to the Basel sort of thing required to make sure a bank never loses more than a few percent of its value.)

OK, gulp, you say. But don't people "need" to have bank accounts? Isn't "transformation" of debt into loans the crucial feature of the financial system? Don't equity holders "require" high risk, high-return stock? No, argues the "run-free financial system" essay. But let's not go there. Let's just restructure things so that the bank can hold exactly the same assets it has today, and its investors can hold exactly the same assets they hold today.

So, moving to the right in my little picture, suppose bank stock is held in a mutual fund, exchange traded fund, or a special-purpose "bank." Bank stock is the only asset these companies hold, and that stock is also traded on exchanges. These banks fund themselves by the same mix of debt, equity, deposits, and heck even overnight wholesale debt, commercial paper, and so forth.

Now, if the value of the bank stock falls, these holding companies fail, just as my original bank failed. But there is a huge difference. You can resolve the holding company in a morning and still make it to play golf in the afternoon.  The only asset is common stock, commonly traded! There are no derivatives positions to unwind, no strange positions in offshore investment trusts, or whatever.  The "living will" simply specifies how much common equity each debtholder gets in the event of bankruptcy. There is never any need to break up, liquidate, assess, or transfer bits and pieces of the big bank.

Furthermore, there is no more obscurity over the value of  the holding company assets. We see the value of bank assets, marked to market, on a millisecond basis.

The holding companies can provide all the retail deposit services banks now provide. In fact, they could contract out to the banks to provide those on a fee basis, so the customer might not even need to know.

In addition, any sane holding company would hold the stock of several banks, diversifying the risk, and thus reducing the chances of ever needing to be wound up. Come to think of it, any sane holding company would also diversify out of banking, but now we're back to my larger vision of equity-financed banking and sensible small changes in financial structure to achieve it.

In the meantime, there you have it. 100% equity financed banks can still give bank creditors exactly the same assets they hold today, and allow failures of those debts to be resolved in a morning.








Wednesday, April 13, 2016

MetLife

What does "systemically important" mean? How can an institution, per se, be "systemically important"? The WSJ coverage of Judge Rosemary Collyer's decision rescinding MetLife's designation as a "systemically important financial institution" gives an interesting clue to how our regulators' thinking is evolving on this issue:
The [Financial Stability Oversight] council argued — bromide alert — that “contagion can result when relatively modest direct, individual losses cause financial institutions with widely dispersed exposures to actively manage their balance sheets in a way that destabilizes markets.”
It's not a bromide. It is a revealing capsule of how the FSOC headed by Treasury thinks about this issue.


"Actively manage balance sheets" is a fancy phrase for "sell assets." So there you have it. "Systemically important" now just means that an institution might sell assets, because selling assets might lower asset prices. "Contagion" and "systemically important" are no longer about runs, in which you see one bank in trouble and take your money out of a different one. They are no longer about the (false, but plausible) domino theory, that if I default and owe you money, you default.

Policy is no longer just about stopping runs. Policy is not just about stopping any large bank from failing, or ever just losing money. Policy is about  stopping asset prices from falling, and stopping even the small marginal additional fall in prices that might accompany one  large institution's sales.  (Except that leverage and capital ratios now force institutions to sell even if they don't want to, a delicious case of contradictory regulatory commands.)

Owen Lamont's classic characterization of policymakers' attitude toward selling short now applies to selling at all.
 Policymakers and the general public seem to have an instinctive reaction that short selling is morally wrong. Short selling has been characterized as inhuman, un-American, and against God
The Journal nails the basic problem:
For eight years, federal regulators have failed to define precisely the “systemic risks” they claim they can identify across the financial landscape.
But the absence of a definition makes it easy to endlessly expand the term's meaning.

Sunday, April 10, 2016

NBER AP

On Friday I attended the NBER Asset Pricing meeting (program here) in Chicago, organized by Adrien Verdelhan and Debby Lucas. The papers were unusually interesting, even by the high standards of this meeting. Alas the NBER doesn't post slides so I don't have great visuals to show you.


Lars Hansen started with the latest in the Hansen-Sargent ambiguity / robustness work, Sets of Models and Prices of Uncertainty. Stavros Panageas gave a beautiful discussion, complete with PowerPoint animations. He characterized the paper as a major advance, for reducing the range of models over which an ambiguous agent looks for the worst-case scenario, and for making that range state-dependent.

In the application, the agent worries that the mean growth rate of consumption and the AR(1) coefficient might be wrong; a more persistent consumption growth process is hurtful, and that pain is more in bad times.

I haven't followed this work closely enough. I still wonder what the testable implications are -- how different is the asset pricing model from one in which the true consumption growth process is just a bit different from our estimate, in the worst possible way?

Still, it's nice to see a Nobel Prize winner leading off a conference, and with easily the most technical paper at that conference, with another one (Rob Engle) in the audience. That tells you something about the seriousness of this group. Also, this is serious behavioral finance by any metric -- a disciplined model of probability misperceptions, which is nice to see.

Robert Novy-Marx presented Testing Strategies Based on Multiple Signals, discussed by Moto Yogo. We're all familiar with the phenomenon that if you try 10 characteristics and pick the best few to forecast returns, t-statistics are biased and performance falls off out of sample.

Robert pointed out that if you put those best 3 in a portfolio, they diversify each other, reducing the in-sample variance of the portfolio, and boosting Sharpe ratios and t-statistics even further.

Many ``smart beta'' funds are doing this, so the fall-off in performance from backtest to real money is relevant beyond academia.

The extent of this bias is impressive. Here is the distribution of t statistics that results when you pick the best three of 20 completely useless signals, and put them in a portfolio. Critical values of 4 and 5 show up routinely in Robert's calculations.

Laura Veldkamp presented her paper with Nina Boyarchenko and David Lucca, Taking Orders and Taking Notes: Dealer Information Sharing in Financial Markets, discussed ably (of course) by Darrell Duffie. Is it a problem that the dealers who are the prime bidders at Treasury auctions have been caught talking to each other ahead of the auction? Surprisingly, no: the Treasury can come out ahead when dealers share information with each other, and investors can potentially come out ahead too.

This warms my contrarian economist heart. We know so little about how markets work, and regulators are so quick to jump on supposedly bad behavior, that it's lovely to see a clear and convincing model that explains the kind of second-order and equilibrium effects economists are good at.

Brian Weller presented Measuring Tail Risks at High Frequency, discussed nicely by Mike Chernov. Brian's basic idea is to run cross-sectional regressions of bid/ask spreads, normalized by volume and depth, on the cross-section of factor betas. Since spreads are larger when dealers are more worried about big jumps, this produces a measure of time-varying probability x size of such jumps. The measure correlates well with the VIX.

Michael Bauer presented his paper with Jim Hamilton, Robust Bond Risk Premia, discussed very nicely by Greg Duffee. (My discussion of a previous presentation.) This paper is really about whether macro variables help to forecast bond returns. We're used to "Stambaugh bias": if you forecast returns with a persistent regressor, and the innovation in the regressor is strongly negatively correlated with the innovation in the return, then the near-unit-root downward bias in the regressor's autocorrelation seeps over into upward bias in return predictability. But macro variables forecasting bond returns have innovations nearly uncorrelated with the returns, so that's not much of a problem. Michael and Jim show another problem: with overlapping returns, standard t-statistics can be biased too.

This led to a pleasant reassessment of bond return forecasts. Some points that came up: econometrics aside, many return forecasters don't do well out of sample. Many of the issues are specification issues orthogonal to this econometric point. For example, evaluating the huge forecastability of bond returns from a combination of level and inflation documented by Anna Cieslak and Pavol Povala, where the forecasters look a lot like a trend, is really about specification and interpretation, not econometrics. I held out the view that the important part of my paper with Monika Piazzesi is the single-factor structure of expected returns, not whether small principal components help to forecast returns. We had a pleasant interchange on whether it's a good or terrible idea to run one-year horizon forecasting regressions. I like them, because they attenuate measurement error; raising a weekly autoregression to the 52nd power yields junk. Greg dislikes them, and gave a stirring reminder of Bob Hodrick's point that you can include lags of the forecasting variables instead.

Nick Roussanov presented his paper with Erik Gilje and Robert Ready, Fracking, Drilling, and Asset Pricing: Estimating the Economic Benefits of the Shale Revolution, with Wei Xiong discussing. They track the reaction of stock prices to the shale oil boom. In particular, they showed that stocks which rose on a huge shale announcement subsequently rose even more as more good shale news came in -- until, as Wei pointed out, prices collapsed.

Nick also used stock market value to try to estimate the economic benefits of fracking. It's a worthy effort, but let's remember the difficulties. In a competitive world with no adjustment costs, profits are zero and there are no abnormal stock returns. Stock capitalization may rise, as firms issue stock to invest, but that measures the value of capital invested, not the consumer surplus of shale. Still, the general idea of mixing asset pricing, energy economics, and economic measurement from stock prices is intriguing.

Jonathan Sokobin, FINRA's Chief Economist, presented "An Overview of FINRA Data," which I alas had to miss. I'm delighted anyone from the government wants us to use their data!

The AP meeting has a nice tradition. Usually the most boring part of a conference is the author's response to the discussant. The AP meetings do away with this -- or rather, the author can respond if someone in the audience raises his or her hand and says "I'd like to hear your response to x." That actually happened! But by and large the AP meetings preserve time for, and a tradition of, very active participation and discussion, and this one was no different.


Tuesday, April 5, 2016

Next Steps for FTPL

Last Friday, April 1, Eric Leeper, Tom Coleman, and I organized a conference at the Becker-Friedman Institute, "Next Steps for the Fiscal Theory of the Price Level." Follow the link for the whole agenda, slides, and papers.

The theoretical controversies are behind us. But how do we use the fiscal theory to understand historical episodes, data, policy, and policy regimes? The idea of the conference was to get together and help each other map out that agenda. The day started with history, moved on to monetary policy, and then to international issues.

A common theme was various forms of price-related fiscal rules, fiscal analogues to the Taylor rule of monetary policy. In a simple form, suppose primary surpluses rise with the price level, as
\[ b_t = \sum_{j=0}^{\infty} \beta^j \left( s_{0,t+j} + s_1 (P_{t+j} - P^\ast) \right) \]
where \(b_t\) is the real value of debt, \(s_{0,t}\) is a sequence of primary surpluses budgeted to pay off that debt, \(P^\ast\) is a price-level target, and \(P_t\) is the price level. The debt can be real, or nominal with real value \( b_{t}= B_{t-1}/P_t\), but I write it as real debt to emphasize the point: this equation too can determine the price level \(P_t\). If inflation rises, the government raises taxes or cuts spending to soak up extra money. If inflation declines, the government does the opposite, putting extra money and debt into the economy in a way that does not trigger higher future surpluses, and so pushes up prices.
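To see the stabilizing mechanism in the simplest possible case, consider a steady state of that rule with constant debt \(b\) and price level \(P\): the geometric sum gives \(b = (s_0 + s_1(P-P^\ast))/(1-\beta)\), which can be solved for \(P\). Here is a toy calculation (my own illustration with made-up numbers; the names `steady_state_price`, `s1`, and so on are hypothetical, not from any conference paper):

```python
# Toy steady state of the fiscal rule above (illustrative numbers only):
# b = (s0 + s1*(P - Pstar)) / (1 - beta), solved for the price level:
# P = Pstar + ((1 - beta)*b - s0) / s1.

beta = 0.96            # discount factor
Pstar = 1.0            # price-level target
b0 = 25.0              # real debt the budgeted surpluses s0 are set to service
s0 = (1 - beta) * b0   # budgeted surplus that exactly pays off b0

def steady_state_price(b, s1):
    """Steady-state price level at which the rule values real debt b."""
    return Pstar + ((1 - beta) * b - s0) / s1

# With debt at its budgeted level, P = Pstar for any response s1.
# After a fiscal shock that raises debt to 30, a stronger fiscal
# response s1 keeps the price level closer to the target:
for s1 in (0.5, 1.0, 2.0):
    print(s1, steady_state_price(30.0, s1))
```

A stronger fiscal response \(s_1\) pulls the price level back toward the target after a debt shock, which is the sense in which the rule resembles a Taylor rule.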

(Note: this post has embedded figures and mathjax equations. If the last paragraph is garbled or you don't see graphs below, go here.)

That idea surfaced in many of the papers.


The morning had several papers studying the gold standard and related historical arrangements. To a fiscal theorist, the gold standard is really a fiscal commitment. No gold standard has ever backed its note issue 100%, and none has even dreamed of backing its nominal government debt 100%. If a government had that much gold, there would be no point to borrowing.

So a gold standard is a commitment to raise taxes, or to borrow against credible future taxes, to get enough gold should it ever be needed. The gold standard says: we commit to pay off this debt at one, and only one, price level. If inflation gets big, people will start to want to exchange money for gold, and we'll raise taxes. If inflation gets too low, people will start to exchange gold for money, and we'll print it up as needed. Usually, in the fiscal theory,
\[ \frac{B_{t-1}}{P_t} = E_t \sum_{j=0}^{\infty} \beta^j s_{t+j}\]
the expectation of future surpluses is a bit nebulous, so inflation might wander around a lot like stock prices. The gold standard is a way to commit to just the right path of surpluses that stabilize the price level.
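Solving the valuation equation for the price level makes the commitment explicit (just rearranging, no new assumptions):
\[ P_t = \frac{B_{t-1}}{E_t \sum_{j=0}^{\infty} \beta^j s_{t+j}} \]
Under the gold standard, surpluses must adjust so that the denominator moves one-for-one with the debt \(B_{t-1}\), pinning \(P_t\) at parity; absent that commitment, any news about the denominator moves the price level, stock-price style.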

A summary, with apologies in advance to authors whose points I missed or misunderstood:

Part I: History




George Hall presented his work with Tom Sargent on the history of US debt limits, together with a fantastic new data set on US debt that will be very useful going forward.


Price of a Chariot Horse: 100,000 Denarii
François Velde and Christophe Chamley took us on a lightning tour of monetary arrangements across history, prompting a thoughtful discussion on just where fiscal theory starts to matter and where it really is not relevant. (François easily gets the prize for the best set of slides. Picking just one was hard.)

Michael Bordo and Arunima Sinha presented an analysis of suspensions of convertibility: governments temporarily abandon the gold standard during war, then go back at parity afterward. Maybe. By going back afterward, people are willing to hold a lot of unbacked debt and currency during the war. But sometimes the fiscal resources to go back afterward are tough to get, and the benefits of establishing credibility so you can borrow in the next war seem far off. When people are unsure whether the country will go back, the wartime inflation is worse, and the costs of going back at parity are heavier. They analyze France vs. the UK after WWI.


Martin Kliem took us on a tour of a big inflation in a previous European currency union, the Holy Roman Empire in the early 1600s. Europe has had currency union without fiscal union for a long time, under various metallic standards and coinages. In this case small states, under fiscal pressure from the Thirty Years' War, started to debase small coins, leading to a large inflation. It ended with an agreement to go back to parity, with the states absorbing the losses. (In my equation, they needed a lot of surpluses to match \(P\) with \(P^\ast\).) We had an interesting discussion on just where those funds came from. Disinflation is always and everywhere a fiscal reform.


Margaret Jacobson presented her work with Eric Leeper and Bruce Preston on the end of the gold standard in the US in the 1930s. (Eric modestly stated his contribution to the paper as finding the matlab color code for gold, as shown in the graph.)  Margaret and Eric interpret the fiscal statements of the Roosevelt Administration to say that they would run unbacked deficits until the price level returned to its previous level, the \(P^\ast\) in my above equation.  Much discussion followed on how governments today, if they really want inflation, could achieve something similar.

Part II: Monetary Policy

Chris Sims took on that issue directly. If you want inflation, just running big deficits might not help. Hundreds of years in which governments built up hard-won reputations that when they borrow money, they pay it off, are hard to upend immediately. And even governments that want to break that expectation have mixed promises of stimulus now with deficit reduction later. A devaluation would help, but we don't have a gold standard against which to devalue, and not everyone can devalue relative to everyone else's currency.

Chris' bottom line is a lot like Margaret and Eric's, and my fiscal Taylor rule,
Coordinating fiscal and monetary policy so that both are explicitly contingent on reaching an inflation target — not only interest rates low, but no tax increases or spending cuts until inflation rises. 
But,
• This might work because it would represent such a shift in political economy that people would rethink their inflation expectations.
Chris led a long discussion including thoughts on rational expectations -- it's a stretch to impose rational expectations on policies that have never been tried before (though our history lesson reminded us just how few genuinely novel policies there are!)

Steve Williamson followed with a thoughtful model full of surprising results. The stock of money does not matter, but Fed transfers to the Treasury do. (I hope I got that right!)

My presentation (slides also here on my webpage) took on the "agenda" question. The basic fiscal equation is
\[\frac{B_{t-1}}{P_t} = E_t \sum_{j=0}^{\infty} M_{t,t+j} s_{t+j} \]
For the project of matching history and data, analyzing policy, and finding better regimes, I opined that we have spent too much time on the \(s\), the fiscal part, and not nearly enough on the \(M\), the discount rate part, or on the \(B\) part, which I map to monetary policy.

I argued that in order to understand the cyclical variation of inflation -- in recessions inflation declines while \(B\) is rising and \(s\) is declining -- we need to focus on discount rate variation. More generally, changes in the value of government debt due to interest rate variation are plausibly much bigger than changes in expected surpluses. As interest rates rise, government debt will be worth a lot less, an additional inflationary pressure that is often overlooked.
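A back-of-the-envelope version of the discount-rate point (my own illustration with made-up numbers, valuing the surplus stream as a simple perpetuity):

```python
# Value a constant real surplus s as a perpetuity at real discount rate r,
# and back out the price level from B/P = PV(surpluses). Numbers illustrative.

B = 50.0   # nominal debt outstanding
s = 1.0    # constant real primary surplus

def implied_price_level(r):
    pv = s / r     # present value of surpluses (perpetuity formula)
    return B / pv  # the price level that makes B/P equal that value

print(implied_price_level(0.02))  # PV = 50, so P = 1
print(implied_price_level(0.04))  # PV = 25, so P = 2: doubling r doubles P
```

Halving the present value of surpluses doubles the price level with no change in the surpluses themselves -- the sense in which rising rates are an inflationary shock in the fiscal theory.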

Then I presented short versions of recent papers analyzing monetary policy in the fiscal theory of the price level. Interest rate targets with no change in surpluses can determine expected inflation, but the neo-Fisherian conundrum remains.



Harald Uhlig presented a skeptical view, provoking much discussion.  Some main points: large debt and deficits are not associated with inflation, and M2 demand is stable.

I found Harald's critique quite useful. Even if you don't agree with something, knowing that this is how a really sharp and well-informed macroeconomist perceives the issues is a vital lesson. I answered somewhat impertinently that we addressed these issues 15 years ago: High debt comes with large expected surpluses, just as in financing a war, because governments want to borrow without creating inflation. The stability of M2 velocity does not isolate cause and effect. The chocolate/GDP ratio is stable too, but eating more chocolate will not increase GDP.

But Harald knows this, and his overall point resonates: You guys need to find something like MV=PY that easily organizes historical events. The obvious graph doesn't work. Irving Fisher came up with MV=PY, but it took Friedman and Schwartz using it to make the idea come alive. That is the purpose of the whole conference.


Francesco Bianchi presented his work with Leonardo Melosi on the Great Recession. New Keynesian models typically predict huge deflation at the zero bound. Why didn't this happen? They specify a model with shifting fiscal-dominant vs. money-dominant regimes. The standard model specifies that once we leave the zero bound we go right back to a money-dominant, Taylor-rule regime with passive fiscal policy. However, if there is a chance of going back to a fiscal-dominant regime for a while, that changes expectations of inflation at the end of the zero bound. Even small changes in those expectations have big effects on inflation during the zero bound. (Shameless plug for "The New Keynesian Liquidity Trap," which explains this point very simply.) So, as you see in the graph above, the "benchmark" model, which includes a probability of reverting to a fiscal regime after the zero bound, produces the mild recession and disinflation we have seen, compared to the standard model's prediction of a huge depression.



Fiscal policy is political, of course. Campbell Leith presented, among other things, an intriguing tour of how political scientists think about the political determinants of debt and deficits. My snarky quip: we learned with great precision that political scientists don't know a heck of a lot more than we do! But if so, that too is wisdom.

Part III: International

Red line: regime-switching probability of 30%; blue line: 0%.

Alexander Kriwoluzky presented thoughts on a fiscal theory of exchange rates, applying it to the US vs. Germany, the abandonment of the gold standard and switch to floating rates in the early 1970s. An exchange rate peg means that Germany must import US fiscal policy as well, importing the deficits that support more inflation. Germany didn't want to do that.  People knew that, so a shift to floating rates was in the air. Expectations of that shift can explain the interest differential and apparent failure of uncovered interest parity.


Last but certainly not least, Bartosz Maćkowiak presented a thoughtful analysis of "Monetary-Fiscal Interactions and the Euro Area's Malaise," joint work with Marek Jarociński.

Echoing the fiscal Taylor rule idea running through so many talks, they propose a fiscal rule
\[ S_{n,t} = \Psi_n + \Psi_B \left( B_{n,t-1} - \sum_m \theta_m B_{m,t-1} \right) + \psi_n (Y_{n,t}-Y_n) \]
In words, each country's surplus must react to that country's debt \(B_n\), but total EU surpluses do not react to total EU debt. In this way, the EU is "Ricardian" or "fiscally passive" for each country, but "non-Ricardian" or "fiscally active" for the EU as a whole. In their simulations, this fiscal commitment has the same beneficial effects running through Jacobson and Leeper, Bianchi and Melosi, Sims, and others -- while maintaining the idea that individual countries pay their debts.
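A quick numerical check of that property, assuming equal weights \(\theta_n = 1/N\) purely for illustration (my simplification; the numbers below are made up, not the paper's calibration):

```python
# Each country's surplus responds to its own debt relative to the union
# average, but the responses cancel in the sum, so total surpluses do not
# respond to total debt. Equal weights theta_n = 1/N; numbers made up.

Psi_B = 0.1                      # common response coefficient
debts = [90.0, 120.0, 60.0]      # B_{n,t-1} for three countries
avg = sum(debts) / len(debts)    # sum_m theta_m B_{m,t-1} with theta_m = 1/N

responses = [Psi_B * (B - avg) for B in debts]
print(responses)       # each country leans against its own relative debt
print(sum(responses))  # the union-wide response to debt is zero
```

Each country is fiscally passive with respect to its own debt, yet the debt responses cancel in the aggregate, which is what leaves the union as a whole fiscally active.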

A big thanks to the Harris School and the Becker-Friedman Institute, which sponsored the conference.




Thursday, March 31, 2016

Neo-Fisherian caveats

Raise interest rates to raise inflation? Lower interest rates to lower inflation? It's not that simple.

A correspondent from an emerging market wrote enthusiastically. His country has somewhat too high inflation, currency depreciation and slightly negative real rates. A discussion is going on about raising rates to combat inflation. Do I think that lowering rates in this circumstance is instead the way to go about it?

As you can tell, posing the question this way makes me very uncomfortable! So, thinking out loud, why might one pause at jumping this far, this fast?

Fiscal policy. Fiscal policy deeply underlies monetary policy. In my own "Fisherian" explorations, the fiscal theory of the price level is a deep foundation. If the government is printing up money to pay its bills, the central bank can do what it wants with interest rates; inflation is coming anyway.


Conversely, underlying the decline in inflation in the US, Europe, and Japan is an extraordinary demand for nominal government debt.

Bond markets seem to think we'll pay it off. And that is not a terribly irrational expectation. Sovereign debts are self-inflicted wounds. A little structural reform to get growing again, tweaks to Social Security and Medicare, and next thing you know we're back in the 1990s, wondering what to do when all the government bonds are paid off. Also, valuation is more about discount rates than cash flows. People seem happy -- for now -- to hold government debt despite unusually low prospective returns.

My correspondent answers that his country is actually doing well fiscally.  However, his country is also a bit low on reserves and having exchange rate and capital flight problems.

But current deficits are not that important to inflation either in theory or in fact. The fiscal policy that matters is expectations of very long-term stability, not just a few years of surpluses. Also, contingent liabilities matter a lot. If investors in government debt see a government that will bail out all and sundry in the next downturn, or one that faces political risks, even temporary surpluses are not an assurance to investors. (Craig Burnside, Marty Eichenbaum, and Sergio Rebelo's "Prospective Deficits and the Asian Currency Crisis," in the JPE and ungated here, is a brilliant paper on this point.)

Rational expectations. The Fisherian proposition also relies deeply on rational expectations. In the simplest version, \( i_t = r + E_t \pi_{t+1} \): people see nominal interest rates rise, they expect inflation to be higher, and so they raise their prices. As a result of that expectation, inflation is, on average, higher. (Loose story alert.)

How do they expect such a thing? Well,  rational expectations is sensible when there is a long history in one regime. People see higher interest rates, they remember times of high interest rates in the past, like the late 1970s, so they ratchet up their inflation expectations. Or, people see higher interest rates, and they've gotten used to the Fed raising interest rates when the Fed sees inflation coming, so they raise their expectations. The motto of rational expectations is "you can't fool all of the people all of the time," not "you can never fool anyone," nor "people are clairvoyant."

The Fisherian prediction relies on the interest rate change being credible, long-lasting, and leading to the right expectations. A one-off experiment that might be read as cover for a dovish desire to boost growth at the expense of more inflation, and that might be quickly reversed, doesn't really map to the equations. Europe and Japan, stuck at the zero bound, with a fiscal bonanza (low interest costs on the debt) and slowly decreasing inflation expectations, are much more consistent with those equations.
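For what the equations do say, here is the loose story in its most literal, perfect-foresight form (my own sketch, illustrative numbers only): a credible, permanent change in the nominal-rate peg moves inflation one-for-one.

```python
# Perfect-foresight Fisher relation: i_t = r + pi_{t+1}, so under a credible
# peg, inflation is pi_{t+1} = i_t - r, tracking the peg one-for-one.
# Illustrative numbers only.

r = 0.01                          # constant real rate
peg = [0.02] * 5 + [0.04] * 5     # nominal rate credibly raised from 2% to 4%

inflation = [i - r for i in peg]  # inflation tracks the peg, period by period
print(inflation)
```

Everything in this one-line expectation step hangs on the peg being credible and permanent, which is exactly what the caveats above question.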

Liquidity. When interest rates are positive and money does not pay interest, lowering rates means more money in the system, and potentially more lending too. This classic liquidity channel, which goes the other way, is absent for the US, UK, Japan, and Europe, since we're at the zero bound and since reserves pay interest. (Granted, I couldn't get the liquidity effect in my equations to be large enough to offset the Fisher effect, but that depends on the particulars of the model.)

Successful disinflations. Disinflations are a combination of fiscal policy, monetary policy, expectations, and liquidity. Tom Sargent's classic "The Ends of Four Big Inflations" tells the story beautifully.

Large inflations result from intractable fiscal problems, not central bank stupidity. In Tom's examples, the government solves the fiscal problem; not just immediately, but credibly, for the foreseeable future. For example, the German government in the 1920s faced enormous reparations payments. Renegotiating those payments fixed the underlying fiscal problem. When the long-term fiscal problem was fixed, inflation stopped immediately. Since everybody knew what the fiscal problem was, expectations were quickly rational.

The end of inflation coincided with a large money expansion and a steep reduction in nominal interest rates. During a time of high inflation, people use as little money as possible. With inflation over, real money demand expands.  There was no period of monetary stringency or interest-rate raising preceding these disinflations.

So these are great examples in which the Fisher story works well -- lower interest rates correspond to lower inflation, immediately. But you can see that lower interest rates are not the whole story. The central bank of Germany in 1922 could not have stopped inflation on its own by lowering rates. I suspect the same is true of high-inflation countries today -- usually something is wrong other than just the history of interest rates.

So, apply new theories with caution!

To the raising interest rates question for the US and Europe, some of the same considerations apply. We won't have any liquidity effects, as central banks are planning to just pay more interest on abundant reserves. Higher real interest rates will raise fiscal interest costs, which is an inflationary shock by fiscal theory considerations. The big question is expectations. Will people read higher interest rates as a warning of inflation about to break out, or as a sign that inflation will be even lower?