Sunday 25 April 2010

The effect of Basel 3 and other bank regs...Winners and Losers

The net effect of all these regulations is to discourage financial innovation, increase capital and liquidity costs, encourage consolidation based on scale economies, discourage moves to universal banking based on decreased scope economies, encourage moves to lower-tax regimes (mainly in the east), discourage securitization, discourage OTC derivatives, encourage exchanges and CCPs, increase investment in risk management (especially counterparty credit risk and market risk), encourage collateralization, make trading books smaller and banking books bigger, increase the demand for operational efficiency, increase regulatory arbitrage, increase compliance and reporting costs, and discourage alternative assets.

Friday 23 April 2010

Moral Hazard is a wonderful thing! NOT.

Moral Hazard is a wonderful thing! Part of the fundamental role of regulation is to avoid it, and yet in our increasingly short-term world, politicians and voters alike are unwilling to take short-term pain for a long-term benefit - like letting some of these companies pay the price of poor liquidity risk management. Even liquidity/capital injections just make the problem more entrenched the next time. Sometimes I think that risk management only works against a stable context - some lender or provider of liquidity of last resort, a government, an insurance company, an IMF - and without it the network of relationships crumbles. As institutions get bigger and more global, there is no longer some JPMorgan in the background ready to bail out the system. Perhaps this suggests too big to fail really is just too big...

Thursday 22 April 2010

Proprietary trading at federally insured banks

I have never really bought into the need for proprietary trading at federally insured banks. The ugly undisclosed truth is that most banks have no competitive advantage in much of their trading activities; if these operations were evaluated on a risk-adjusted basis, they would be, and have been, value destroying. If truly private institutions want to do this, that's between them and their shareholders, but when public guarantees are made, then the government has a responsibility to intervene. Moral Hazard is alive and well. Banks, like the rest of us, delude themselves into believing short-term gains are indicative of internal competence. In a somewhat efficient market, most of us have beta, but alpha is and will always be a rarity.

Technocrats vs Politicians Round 1...

The world of bank regulation is being torn between two very different agendas. First, the technocrats of Basel, as they struggle to put in place amendments to the comprehensive version (June 2006) that has formed the bedrock of much market and credit risk management thinking for the past few years. BIS has introduced a host of new consultative documents, subject to ratification, that promise significant increases in the levels of regulatory capital and liquidity, particularly for the trading book. Initiatives like the Stress VaR calculation, the incremental risk capital charge for users of internal models for specific risk, counterparty credit risk capital increases, the Liquidity Coverage Ratio, the Net Stable Funding Ratio and the Leverage Ratio all proffer band aids for weaknesses in the original accord. The danger is that the technocrats, responding to real issues in the original accord, lose the "high ground" and surrender the two principles that made Basel II such an innovation - capital is aligned with risk, and the better the job we do of measuring risk, the lower the capital charge. Unfortunately the "band aid" approach, understandable as it is, risks making an already highly complex set of regulations conflicting, opaque and even more open to the regulatory arbitrage that brought down its predecessor - Basel I.

Not surprising, therefore, is the new-found emphasis on the agenda of the second group of actors in this drama - the politicians. They are reacting to market and economic crisis, and selling their approaches to an angry and impatient tax-paying public. Here the initiatives are basically fourfold. In the US, a raft of initiatives has been proposed by the Obama administration, in general limiting the scope and the size of federally insured bank activities. The real challenge for all these initiatives is of course Congress, where well-funded lobbyists may easily derail the harshest of the proposals. The UK has taken a different approach, experimenting with macroprudential regulation and structural regulatory change, but it faces two different challenges - increasing regulation while not killing the Golden Goose of the City that sustains a sizable chunk of the UK economy. Added to which is the upcoming election, which may endanger any initiative. The EU has been hamstrung by government deficits in certain member countries, which have revealed just how hard it is to get across-the-board agreement on such nationally sensitive economic issues. Finally, enter the G20 and the IMF, which, frustrated at the lack of regulatory change a full year after the crisis, have argued for the simplest, but possibly most destructive, approach of all: bank taxes based on the size of liabilities.

So whose constraints will determine the future of banking for the next ten years? I vote for the politicians - but it will further push financial innovation to the east and away from established financial centers.

Sunday 18 April 2010

On having multiple discount functions

Another interesting development in the world of Behavioural Finance is the idea that individuals have multiple discount functions. For example, researchers such as Prof David Laibson argue that we have at least two discount functions, one for the very short term - minutes, hours, days - and another for the longer term. The tension the investor faces is the tradeoff between the two. Simple exponential discount functions imply that the discount rate is more or less constant over time. Yet consider this thought experiment - imagine you were given two sets of choices: 1) a) $100 now or b) $101 in 60 minutes from now, and 2) a) $100 in 1 week or b) $101 in 1 week and 60 minutes. Almost everyone will choose 1a and 2b, but this is not consistent with a simple discount function. The attraction of the immediate is overwhelming for most people, perceived as far more beneficial than any delayed extra value. Yet beyond the immediate, people switch to a more rational process incorporating the extra delayed benefit. It's as if they have two discount functions: one that kicks in for the near term, emphasizing short-term benefits, and a second which is used to evaluate longer-term costs and benefits. Even more fascinating, some researchers are arguing that these discount functions actually correspond to different, more or less developed, parts of the cerebral cortex, associated with high-level reasoning and with more emotional, animalistic instincts. But these discount functions are not merely quantitatively different; they seem to be qualitatively different too. For example, when you change the reward, say to ice cream or a candy bar, the short-term benefits even more massively outweigh the longer-term benefits.
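For the quantitatively inclined, the simplest formalization of this two-function idea is Laibson's quasi-hyperbolic ("beta-delta") discounting. Here is a minimal sketch in Python; the parameter values and the hourly time unit are purely illustrative assumptions, not Laibson's estimates:

```python
# A minimal sketch of quasi-hyperbolic ("beta-delta") discounting.
# beta and delta values below are illustrative assumptions.

def exponential_discount(t, delta=0.999):
    """Standard exponential discounting: a constant rate per period."""
    return delta ** t

def quasi_hyperbolic_discount(t, beta=0.7, delta=0.999):
    """Beta-delta discounting: an extra penalty beta on ANY delay t > 0."""
    return 1.0 if t == 0 else beta * delta ** t

# The thought experiment, with t in hours: $100 now vs $101 in 1 hour,
# and $100 in 1 week (168 hours) vs $101 in 169 hours.
now_100 = 100 * quasi_hyperbolic_discount(0)
hour_101 = 101 * quasi_hyperbolic_discount(1)
week_100 = 100 * quasi_hyperbolic_discount(168)
week_101 = 101 * quasi_hyperbolic_discount(169)

print(now_100 > hour_101)   # True: the immediate $100 wins (choice 1a)
print(week_100 < week_101)  # True: once both are delayed, the $101 wins (choice 2b)
```

With beta = 1 the model collapses back to standard exponential discounting, and the preference reversal in the thought experiment disappears.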

Of course, marketing people instinctively know these things, we suspect: the offer of the free iPod when you sign up for a new credit card; people's tendency to spend hours negotiating the physical aspects of a car purchase, and then only a moment deciding whether to get auto financing (which is where most of the profits are made).

So what is the punchline? Real physical things obtained immediately are valued much, MUCH more than abstract, non-physical things delayed.

What do probabilities mean to people? The Cost of Certainty

One of the most interesting notions in Prospect Theory is the idea that individuals really don't understand probabilities. That is, they make little distinction between similar probabilities (say the difference between 0.25 and 0.3) and treat them essentially the same. However, that is not true at the extremes: we seem to have a cognitive bias in favor of 0 and 1 as absolute certainties. Yet ironically, absolute certainties are never the case. Mathematically this is described as a weighting function that converts quantitative probabilities into qualitative decision weights - using a function like the following:
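The original post illustrated the weighting function with a figure; as a stand-in, here is a sketch using one commonly cited functional form, the Tversky-Kahneman (1992) weighting function, with their gamma = 0.61 estimate for gains taken as an assumption:

```python
# A commonly used probability weighting function (Tversky & Kahneman, 1992):
# w(p) = p^gamma / (p^gamma + (1-p)^gamma)^(1/gamma). gamma = 0.61 is their
# published estimate for gains, used here purely as an assumption.

def weight(p, gamma=0.61):
    """Map an objective probability p to a subjective decision weight w(p)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(weight(0.01))                # small probabilities are overweighted
print(weight(0.99))                # near-certain probabilities are underweighted
print(weight(0.25), weight(0.30))  # mid-range probabilities: barely distinguished
```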
So basically we underestimate probabilities when they are close to 1 and overestimate them when they are close to zero. What does this mean for finance? Well, people will pay more to receive contingent cashflows which are very unlikely, because they overstate the actual probability. This is part of the attraction of lotteries - the probability of winning is tiny, but we overweight it, and that makes all the difference. Similarly, investors will heavily discount cashflows that are nearly, but not absolutely, certain.

Saturday 17 April 2010

Prospect Theory and what it means for economics

Traditional economics is based on the notion of a utility function which increases (as a concave function) with increasing levels of wealth. Our decisions as economic actors therefore boil down to selecting the option that generates the highest expected utility. But ask yourself this: faced with a fair one-off gamble in which you might make 2000 dollars or lose 1000 dollars, what would you do? Most people would choose not to gamble, even though the expected value is positive. Why? Note that this is not true if we were able to play the game, say, a thousand times - in which case almost everyone will take the bet.
Behavioral theorists suggest this unwillingness to take the single bet is because we care about losses more than gains, counter to the principles of traditional utility theory, and so have posited a prospect theory that penalizes losses in current wealth more heavily than it rewards equivalent gains. If you like, the utility function is "kinked" around the current level of wealth.
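A sketch of this kinked value function, using the standard power-function form from the prospect theory literature (the parameter values alpha = beta = 0.88 and lambda = 2.25 are the commonly quoted estimates, taken here as assumptions):

```python
# The Kahneman-Tversky value function: concave for gains, convex and steeper
# for losses ("kinked" at the reference point). Parameter values are the
# standard literature estimates, used here as assumptions.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of a change x relative to current wealth."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# The one-off gamble from the text: +$2000 or -$1000 with equal probability.
expected_value = 0.5 * 2000 + 0.5 * (-1000)           # +500 in dollar terms
prospect_value = 0.5 * value(2000) + 0.5 * value(-1000)

print(expected_value > 0)   # True: attractive in pure dollar terms
print(prospect_value < 0)   # True: loss aversion makes the same bet unattractive
```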

Most of financial markets are based on a zero-sum game. Consider a structured product: if I win, you must lose, and vice versa. Financial engineering can only add value to such transactions by exploiting differences in the preferences of the transactors - for example, by emphasizing the elements of the transaction that most appeal to the other side, say high returns, or delayed payment, or low risk. Being a zero-sum game, these elements are paid for by subtracting from aspects of the transaction that the counterparty does not care so much about, like operational complexity, or like risk. Value is created from the transaction (but not utility) if the structure reflects the differences in the counterparties' wants and needs. Prospect theory acknowledges that value is inherently client specific (it depends on their reference point, for example), and that financial product design is really about understanding behavioral biases in order to better align products with them. So is prospect theory purely descriptive while utility theory is normative? Should we give up structured products because they pander to our cognitive weaknesses? Although prospect theory is partly descriptive, it is also normative: other counterparties have cognitive biases too, and we must interact with them, so it behooves us to understand and acknowledge these biases/limitations/constraints in our transactions with them.

Models and Overconfidence

Whether a scientist or a financial engineer, when we invest in an idea, a model, a way of thinking, by definition we believe it to be an accurate representation of the world, and so interpret new information in terms of this model. New information that is consistent with the model makes us more confident in its use, while new conflicting information is often regarded as spurious, exceptional or irrelevant. The model provides us with an anchor, and biases us away from alternatives. We protect our models from the world of evidence in the same way we protect ourselves - no surprise here, as in some sense these models constitute ourselves. The models change only with crisis, some overwhelming surge of evidence that cannot be ignored. Ironically, financial models (unlike models of physics), when broadly and consistently acted upon, will often build up arbitrage and economic pressures in the market that break the models. So no one sees the breakdown coming, everyone is shocked when it does happen, and yet everyone can rationalize after the fact why the crisis occurred.
Perhaps the truth is that reality is more complex than we can ever know, and like a good Buddhist, all we can do is acknowledge and respect the complexity that we face, realizing, like the quantum physicist, that the act of understanding the world also changes it into something else.

The Curse of Overconfidence and some revealing party tricks...

Irving Fisher, probably the greatest American economist of his generation (best known for his theory of interest rates), remarked famously just before the great crash of 1929 that "Stock prices have reached what looks like a permanently high plateau". To be fair, he put his money where his mouth was, and invested much of his assets in the stock market. Soon after, he was broke, and had to sell his home in New Haven, Connecticut to pay his debts (fortunately his university bought his house and let him live there, or he would have been destitute!). Even after his experience, he continued to argue that he had been right.
The curse of overconfidence is not unique to economists; indeed, one of the major trends in new economic thought is behavioral finance, applying all of social science theory to finance, not just economics. And one of the cornerstones of this theory is that people are overconfident, inconsistent, lazy, and simply stupid at times. In short, not the "homo economicus" of classical theory. Consider one's own overconfidence - ask yourself or your friends to define a 90% [lower, upper] range for answers to the following questions (no cheating!), and find out if you indeed tend to get 9 out of 10 of them right.
  1. What is the population of Turkey?
  2. What is the weight of the Empire State Building? (tons)
  3. What is the GDP of Australia?
  4. How many cells in the human brain?
  5. How likely is it that you will be struck by lightning this year?
  6. How many bibles are there in the world?
  7. How many children will die of malnourishment in the world in the next day?
  8. How likely is it that you will be in a plane crash this year?
  9. What is the likelihood of an asteroid hitting the earth in the next year and destroying all life on the planet?
  10. What is the likelihood of an AAA bond defaulting in the next year?
 Answers (no peeking)
  1. 72,561,312 as of 1st January 2010
  2. 365,000 tons
  3. $1 trillion USD
  4. 100 billion
  5. 1/750,000
  6.  6,000,000,000
  7. 3000
  8. 1/675,638
  9. 0.00000002
  10. 0.1
Most people (assuming they play fair) get significantly fewer than 9 out of 10 of these questions right. They believe the world is less complex and better understood than it actually is, and this influences their behaviour. Unfortunately sometimes with disastrous results - I am reminded of the quote from Larry Kersten, an American sociologist: "before you attempt to beat the odds, be sure you could survive the odds beating you."


Moves to increase transparency in the OTC market

It’s clear that the juggernaut of transparency will be hard, if not impossible, to resist. And this is generally a good thing for most derivatives users, even in sophisticated markets like the US. For far too long, OTC derivatives have hidden risk and complexity from users and regulators under the guise of “customization”, providing much value to product innovators but little real value to end users. Eventually, the essentially one-sided nature of much of this technology will be revealed. However, the real risk is that the pendulum will swing too far in the opposite direction, preventing even sophisticated end users from laying off their risks with bilateral contracts. Part of the problem with such a one-sided response is the assumption that transparent markets are a cure-all. They are not. Transparency is about having an information baseline against which all transactors in the market can make their decisions. But derivatives information does not necessarily equate to knowledge or expertise in derivatives product use. All the information in the world does not create an adequate valuation or hedging model, and still less does it create an understanding of the limitations of the valuation models used.
So regulation should increase the transparency of transactions, but it should also target the expertise of potential counterparties. Some institutions (much like non-qualified investors) should not be transacting with counterparties if the expertise imbalance is simply too great. And frankly, given the huge investments financial services firms have made in financial engineering human capital, this may often be the case.

Thursday 8 April 2010

The Rise of Collateral and the Urgent Need for Collateral Risk Management

The downturn has clearly pushed collateralization to the fore of OTC derivatives, repos and securities financing. What was often viewed as a luxury is now seen as a necessity, as counterparty risk is not confined to low-rated entities (witness Lehman!). But the rise of collateralization does not make the underlying problem disappear. Collateral may open up some business opportunities, but it also converts counterparty credit risk into operational and legal risks and into residual market and residual credit risk. This new bundle of risks is certainly different from what was held before, and arguably more complex. Operational risk is implicit in everything from reconciliation to posting collateral with a custodian, from collateral valuation to settlement. Legal risk is endemic to the OTC derivatives space and is of course the rationale for the various master agreements (ISDA, GMRA etc) and associated credit support annexes, many of which remain untested in emerging-market legal jurisdictions. Residual risk always remains, whether from market risk - the changing value of the collateral, the changing risk-sensitive margins, or even reinvestment risk - or from credit risk - wrong-way risk, the possibility of correlation between counterparty default and collateral values. In short, collateralization is a good thing, one to encourage, provided one realizes that risk is not removed, merely converted into another form. And these new risks must be managed if collateral is not to give a false sense of security in these turbulent times.
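To make the residual-risk point concrete, here is a toy sketch of how a "fully collateralized" position can still leave exposure once a haircut and an adverse move over the margin period of risk are taken into account. All the numbers are invented for illustration:

```python
# A toy illustration of residual exposure on a collateralized position:
# collateral converts risk, it does not remove it. All figures are invented.

def residual_exposure(mtm, collateral, haircut=0.0, mtm_move=0.0):
    """Exposure left after applying a haircut to the collateral value and an
    adverse mark-to-market move over the margin period of risk."""
    effective_collateral = collateral * (1 - haircut)
    return max(mtm + mtm_move - effective_collateral, 0.0)

# Fully collateralized, no haircut, no move between margin calls:
print(residual_exposure(mtm=10_000_000, collateral=10_000_000))  # 0.0
# But a 5% haircut plus a 300k adverse move before the next margin call
# reopens an exposure:
print(residual_exposure(mtm=10_000_000, collateral=10_000_000,
                        haircut=0.05, mtm_move=300_000))  # 800000.0
```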

The coming Pension Fund Crisis and how it will change the world

The next big thing, after the sovereign debt crisis that is slowly percolating through the financial system, is the pension fund crisis. Basically the problem is that public and private pension funds across the world have long managed their balance sheets as two independent elements - assets and liabilities - and the concept of ALM has never been accepted or acknowledged within the industry. The liabilities - the pensions themselves, managed as an actuarial portfolio based on the demographics of a particular fund and its covered employees - are best thought of as a set of zero-coupon bonds with limited optionality, whose duration reflects the life expectancy of the average insured individual. This of course varies by pension plan, and no surprise therefore that the asset side, the funding of these liabilities, must depend on a detailed analysis of the cash flows associated with those liabilities. In short, we need to build a replicating portfolio for the liabilities (just as in bank ALM) and use that to define the performance expectations of the asset portfolio. This is rarely done - instead pension funds use funds of funds to "divide and conquer" the asset portfolio, and evaluate performance against existing standard market benchmarks for bonds, equities and other assets. But of course, performance for a pension fund means the ability to fund the liabilities; it does not mean the ability to outperform a bond index like the Barclays Capital Aggregate Bond Index - which is typical industry practice. Analysing risk for a pension fund means analysing the risks to achieving that performance - in other words, how likely is it that our asset portfolio will underperform the liability portfolio? With a liability-replicating portfolio, we can build VaR and shortfall models that allow us to monitor and manage the net asset position over time. The lack of this capability is not merely a problem for a few pension funds.
The mismatch of mark-to-market values of assets and liabilities in corporate pension funds is of the order of trillions (yes - trillions) of dollars in the US alone (the UK, Russia and Italy also have big problems). If there is a continued fall in the value of equities, this is likely to get worse, and be a drain on corporate performance for years to come.
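A minimal sketch of the liability-replicating idea: value the liabilities as a strip of zero-coupon bonds and track the funding ratio against them, rather than against a bond index. The cashflows, rates and asset value below are invented for illustration:

```python
# Sketch of liability-driven performance measurement for a pension fund.
# Liabilities are valued as a strip of zero-coupon bonds; the benchmark is
# the funding ratio, not a bond index. All inputs are invented assumptions.

def pv_liabilities(cashflows, rate):
    """Present value of pension payouts treated as zero-coupon bonds."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

liabs = [100.0] * 30   # 30 years of flat (illustrative) pension payouts
assets = 1400.0        # current market value of the asset portfolio

funding_ratio = assets / pv_liabilities(liabs, rate=0.04)
print(funding_ratio)

# The ALM point: a fall in rates inflates the liability side even if the
# assets are unchanged - the mismatch, not the index return, is the risk.
funding_ratio_low_rates = assets / pv_liabilities(liabs, rate=0.02)
print(funding_ratio_low_rates < funding_ratio)  # True
```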

Sunday 4 April 2010

What does it mean to hold a diversified portfolio?

Imagine we have a portfolio of equal positions in different securities. Is this diversified? Probably not. What if all but one of these securities were bonds with small volatilities, and the remaining one an equity position with a much higher volatility - that clearly would not count as properly diversified. So individual security variances need to be considered when thinking of diversification. What about correlation? This too should be included. The best measure of diversification probably considers the contribution of the pieces to the overall variance of the portfolio. But a complication here is that variance is not additive - so instead we often talk of factor variances, calculated using some form of Principal Component Analysis (PCA): basically computing the eigenvectors of the original covariance matrix in order to produce orthogonal risk vectors (whose variances are then additive). Having PCA risk factor variances then allows us to estimate the cumulative contribution of the different factors to aggregate variance, starting with the greatest factor contribution. Much like a Pareto analysis, we ask how different the actual cumulative contribution is from the perfectly diversified case - a straight line - and this gives us a robust measure of diversification.
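A toy sketch of this PCA-based measure, using an invented covariance matrix of three highly correlated assets:

```python
# PCA-based diversification check: eigen-decompose the covariance matrix and
# see how concentrated variance is in the leading factors. The covariance
# matrix here is an invented illustration.
import numpy as np

cov = np.array([
    [0.040, 0.030, 0.028],
    [0.030, 0.040, 0.032],
    [0.028, 0.032, 0.040],
])  # three highly correlated assets

eigenvalues = np.linalg.eigvalsh(cov)[::-1]          # sorted, largest first
share = np.cumsum(eigenvalues) / eigenvalues.sum()   # cumulative variance share

# Perfect diversification would trace a straight line (1/3, 2/3, 1);
# here the first factor dominates, so the portfolio is poorly diversified.
print(share[0] > 1 / 3)  # True
```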

Socialism and Risk Management

So what does socialism have to do with risk management? Everything! Socialism, whether it is in the form of a kibbutz, or Robert Owen's New Harmony, or Karl Marx's communism, is ultimately about the provision of a safety net for all workers. High but unequal and uncertain levels of income are sacrificed for equality. Or, to put it more poetically in the words of Marx: from each according to his ability, to each according to his need. It is a form of risk reduction based on the pooling of risks and of resources. Much like social security or insurance, it works because individual losses (and gains) are shared across a wider population. And of course losses are usually feared more than gains are valued, so such insurance is deemed to have value. The problem with such socialism (and for that matter, communism and insurance) is moral hazard. The old Soviet joke "we pretend to work and they pretend to pay us" captures it all. Moral hazard is endemic to risk sharing, particularly when the potential outcomes are partly under the control of the individuals themselves. The fire insurance that encourages arson, the seatbelts that encourage fast driving, the federal deposit insurance that discourages due diligence on banks. Most of political philosophy is actually an extension of this idea: the production and the distribution of resources in an uncertain world. Unfortunately our positions on such political ideas are usually coloured by our current states, and the inherent moral hazard associated with those states, as we try to game the system for our personal benefit. Philosophers like John Stuart Mill and Rawls have said much the same from the liberal tradition. However, even some conservative thinkers, particularly communitarians, have realized that we are part of a society and our survival as individuals depends on building networks that share opportunities and risks within that society.

Thursday 1 April 2010

Counterparty Risk Management - Best of times? Worst of times?

It was the best of times. It was the worst of times... Stealing a few classic lines from Dickens' "A Tale of Two Cities" is apropos for today's post - the new demand for counterparty credit risk management and collateralization. The best of times for counterparty credit risk? It is no surprise that the crisis has made all and sundry realize that highly rated counterparties such as Lehman Brothers can default, and in turn has pushed counterparty credit risk to the top of the agenda for risk managers and regulators alike. BCBS in particular is pushing for incremental risk capital charges for traded credit (including counterparty credit risk) and is increasing incentives for central counterparties and collateralization as means to significantly reduce counterparty credit exposures. Nor are the regulators comfortable with banks' or the rating agencies' ability to track counterparties' PDs proactively. So naturally the focus in counterparty credit turns to exposure management, through techniques like mark-to-market valuations plus add-ons, or even potential future exposures. Although arguably more representative, the latter is a major computational challenge for many institutions, requiring huge Monte Carlo simulations over long time periods. Not surprising, therefore, that enterprise collateral management is the flavor of the month in counterparty credit risk circles.
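To give a flavor of why potential future exposure (PFE) is such a computational burden, here is a deliberately toy Monte Carlo sketch for a single position; real implementations must do this across whole netted, collateralized portfolios and thousands of risk factors, and every number below is an illustrative assumption:

```python
# Toy Monte Carlo sketch of potential future exposure (PFE) for a single
# forward-style position whose mark-to-market follows a random walk.
# Path counts, horizon and volatility are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps, vol = 10_000, 12, 0.10  # monthly steps over one year

# Simulate mark-to-market paths starting from zero.
shocks = rng.normal(0.0, vol / np.sqrt(12), size=(n_paths, n_steps))
mtm = np.cumsum(shocks, axis=1)

exposure = np.maximum(mtm, 0.0)               # only positive MtM is at risk
pfe_95 = np.quantile(exposure, 0.95, axis=0)  # 95th-percentile exposure profile

print(pfe_95[-1] > pfe_95[0])  # True: the exposure profile widens with horizon
```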

But is it the worst of times? The same regulatory pressures have discouraged many from OTC derivatives and securities borrowing/lending, particularly in the developed world. In Asia, by contrast, OTC derivatives do not carry the same stigma they have in the US and Europe.

Sunday 14 March 2010

Basel 3 = Basel 1.8? And is this a bad thing?

The proposed and partly ratified changes imposed on the Basel II accord have certainly exercised the minds of bankers (and a surprising number of others) over the last year. But are they really a major change? Do they reflect any change of philosophy? Or are they actually a step backwards, away from some of the basic assumptions of Basel II? I believe that these widely touted Basel 3 requirements are better understood as a partial rollback of the original accord.

The basic concept of the Basel accord is the better alignment of capital with institutional risk as measured by extreme percentiles of the distribution of portfolio value. A couple of points come out of this:

• Banks should operate on a consistent global playing field.

• Market values are good measures of asset values

• More sophisticated approaches to risk measurement should be rewarded with lower capital

• Avoid double counting of risk, and incorporate correlation effects where they are robust

• Limit the extent of ad hoc regulatory supervision.

• Quantitative measures are better than qualitative ones


In some ways the new adjustments are consistent with that philosophy. For example, the introduction of an incremental risk capital calculation to capture credit risk in the trading book is very much in line with the concepts of IRB and IMM; indeed it is almost an integration of the two – which, incidentally, is why it will be difficult to implement. Similarly, macroprudential risk is an extension of the scope of the Basel II accord to consider the entire system rather than one element within the system. So too the changes to the definitions of capital – the removal of T3 capital, the more precise use of equity in T1.



But I believe the bulk of the modifications run counter to the underlying philosophy of Basel. For example, the double counting of risk between the Stress VaR and general VaR calculations, and between IRC and the specific market risk capital models. The simple leverage and liquidity ratios introduced as an additional constraint to rein in bank lending may be more binding on many banks than the capital requirement. The use of liquidity adjustments to valuation clearly concedes the limitations of market prices as measures of asset value. And a range of additional liquidity metrics will now be incorporated into regulatory judgments on appropriate bank liquidity levels.



Is this a bad thing? Should we be concerned that Basel III is moving away from the original philosophy of Basel? Yes and no.

Yes, in so far as the original philosophy was valid, no in so far as it was not. It seems to me that we definitely should be concerned about double counting and the introduction of ad hoc rules that leave much to the judgment of individual regulators – this definitely casts doubt on the Basel project – the imposition of a consistent playing field.

No – we should not be concerned about these amendments in that they reflect true weaknesses of the original accord – the lack of attention it gave to liquidity risk, the downplaying of counterparty credit risk in the trading book (which, to be fair, was less of a concern when the accord was drafted), and the inevitable procyclicality of any risk-based capital measure. It is indeed the last of these that is probably the most important. Regulators, like the rest of us, live in a real world, with banks that affect the real economy. Reality has a bad habit of forcing us to compromise theoretical ideals with the practical effects of those ideals. A major overhaul of Basel that dramatically increased capital requirements (for example, through the imposition of countercyclical capital buffers) could jeopardize the very recovery it was designed to protect. And that would be far too big a price to pay for Basel III.

So after all is said and done, perhaps the new amendments do look less like Basel 3 and more like Basel 1.8?

What is Basel III?

The raft of documents produced by regulators across the world has certainly added to the noise and confusion that the credit crisis has left in its wake. The Basel Committee on Banking Supervision (BCBS) has developed its recommendations to the G20, realizing that radical changes in the regulatory regime could endanger the still fragile recovery after the crisis. So, torn between a radical overhaul and a fear of rocking the already rocky boat, it has produced three major documents:

BCBS 159: The first (BCBS 159) has been ratified by the G20 countries and will be implemented by the end of 2010 in most countries. BCBS 159 focuses on amending the internal models methods (IMM) associated with market risk in the trading book. It requires the introduction of a Stress VaR capital calculation in addition to the existing general market VaR capital calculation. For international banks implementing internal models for specific risk, it also requires the additional assignment of Incremental Risk Capital (IRC) – capturing default risk, credit migration risk and credit spread risk in the trading book using techniques similar to those found in the banking book (IRB) – for example, a more conservative confidence interval (99.9%) and a longer time horizon (1 year).
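As a rough illustration of the Stress VaR idea (not the full BCBS 159 rule, which also involves 10-day horizons, 60-day averaging and supervisory multipliers set by regulators), here is a sketch with invented return series:

```python
# Sketch of the Stress VaR idea: run the same VaR measure on a recent window
# and on a 12-month stressed window, and charge capital for both. The return
# series and multiplier values below are invented assumptions.
import numpy as np

def hist_var(returns, confidence=0.99):
    """One-day historical-simulation VaR, reported as a positive loss."""
    return -np.quantile(returns, 1 - confidence)

rng = np.random.default_rng(0)
recent = rng.normal(0.0, 0.01, 250)    # a calm recent year of daily returns
stressed = rng.normal(0.0, 0.03, 250)  # a 2008-style stressed year

m_c = m_s = 3.0  # regulatory multipliers (minimum of 3 under the accord)
capital = m_c * hist_var(recent) + m_s * hist_var(stressed)

print(hist_var(stressed) > hist_var(recent))  # True: the stressed add-on dominates
```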

BCBS 164: This has been proposed but not yet ratified. It imposes changes on capital (more focus on Tier 1 capital in general and equity in particular), yet more changes to counterparty credit risk capital in the trading book (above those required in BCBS 159), the imposition of countercyclical buffers and a general leverage ratio constraint. More interestingly, it requires the development of macroprudential risk management that goes beyond microprudential risk management (focusing on the risks of particular institutions) and instead looks at the systemic risks of the entire network of institutions as part of a global economy.

BCBS 165: Like BCBS 164, this has yet to be ratified but focuses on the need to implement specific metrics for liquidity risk: a short-term, 30-day minimum Liquidity Coverage Ratio to handle a major stress scenario; a long-term, 1-year Net Stable Funding Ratio to define a minimum stable funding requirement; and finally a set of standard liquidity metrics that all regulators will take into account when evaluating the health of a banking institution.

Hegel and Risk Management

What has Hegel got to do with risk management? Everything, actually. Hegel classically argued that change is the continuous dialectic of revolutionary and counterrevolutionary forces. The revolution poses a thesis, while the counterrevolution defends its antithesis. Out of the clash comes a synthesis through which progress slowly appears. What we call risk is often our micro view of a piece of these macro forces causing change. What looks random at one level of analysis is anything but at a higher level of analysis. A system's innate contradictions (whether that system is capitalism - see a chap called Marx - or an asset bubble) slowly develop the forces that cause that system to fail or end. Another way of saying this is that something that cannot go on for ever certainly won't. A painful lesson for all concerned in the wake of the property bubble of 2008-9. Or the demographic bubble of the late 60s and 70s. Or the buildup in US armaments in the 80s and 90s. Or the defense spending of the former USSR over the same period. Bubbles build on a short-sighted focus on momentum rather than absolute values. This time it's different, goes the refrain. No one wants to be the one left holding the baby when the music stops.
The risk is not apparent when the music is playing - our experience of the immediate past is too positive and discourages asking awkward and unpopular questions. When the music stops (as stop it must) the risk becomes an event, an event that brings down the house, and we ask ourselves - how could we have been so foolish...

Wednesday 10 March 2010

The Future of Risk Management - the role of Integrity

I think if there is a common theme to the future of risk management, it’s probably the notion of systems integrity. What does integrity mean? I think it means consistency between parts and the whole. In the context of the future of risk management, this notion of integrity has specific meanings:
  • Integrity of front office and middle office systems
  • Integrity between management, traders and middle office
  • Integrity of returns and risk
  • Integrity of remuneration and risk adjusted returns
  • Integrity across risk types especially market and credit
  • Integrity across tail and non tail risks
  • Integrity between economic measures of risk and regulatory measures
  • Integrity between the network of capital and resource providers and the organization

Middle vs Front office - people and systems

Most institutions have quite separate systems for the front office (basically functions like valuation, trading and clearing) and for the middle office (limit setting, portfolio monitoring, risk analysis, VaR calculations etc). We often find situations where banks have multiple valuation and risk systems in the same organization for the same portfolio. This encourages game playing and prevents executives having the single view of value and risk - at the operational level of trades, positions, counterparties, businesses etc - that they need in order to make decisions. The middle office has long been thought of as the poor relation of the financial institution, viewed as a cost centre that will always lose in any standoff between front office traders wanting to put on one more transaction and risk managers pushing back against limit breaches. This cannot continue. Traditional management accounting structures (P&L centre, cost centre etc) fail to capture reality: they discourage integration and ignore the fact (certainly in buy-side institutions and increasingly on the sell side) that the middle office is a value-added activity. Of course, management’s role in defining the culture, as well as the incentives, is crucial in establishing integrity between front and middle office.

On Risk Adjusted Returns

The notion of risk adjusted returns, or economic value added, has long been held to be the economically correct way to evaluate opportunities in any business. While the details may be complex and hard to implement in practice, the effort to develop measurement systems that cut across risk (usually in the form of capital) and return (expected or historical) is essential to preventing the near-sighted herd mentality that has pushed many financial institutions in the west to the brink. The whole fiasco of high levels of remuneration in institutions that have required extensive government funding could have been largely avoided (or at least ameliorated) if remuneration had been tied to long-term risk adjusted returns, not simply to absolute profits.

The Curse of Silo based risk management

Most institutions measure their risks in distinct silos. Indeed the discipline of risk management has been cursed (?) by the proliferation of specialists focusing on risks narrowly defined. The reality is that anything that can cause a change in value is a risk, and our traditional buckets, say market risk, or counterparty credit, or operational risk, are purely human conventions, and our institutions and our systems need to rise above them and think holistically about how these risks interact, particularly in times of stress.

Reconciling Tail Risks and Non Tail Risks ...Or Not.

Traditional measures of risk - most obviously sensitivities like the greeks, and percentile measures like VaR - capture only selected dimensions of risk. As Nassim Taleb has memorably pointed out, they fail to capture the extreme losses (tail risks) that could derail the organization. Ironically, protecting against these major losses is the whole point of risk capital. Credit risk (Internal Ratings Based approaches), market risk (Internal Models Approaches) and operational risk (Advanced Measurement Approaches) all use the VaR concept as the definition of minimum regulatory capital. Even worse, careless use of such VaR measures actually makes tail risks more likely, as traders go long exposures in the tail that have little effect on VaR but potentially subject the institution to devastating downside. Tail risk measures such as conditional VaR and expected shortfall do help in this regard, but the truth is that no single measure ever completely captures a distribution. So the only solution here is more management understanding of what distributions are about – and this means a more sophisticated management audience for risk results.
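A toy illustration of the tail-blindness argument: a book that earns small steady gains but takes rare large losses (a short-option style payoff). The simulated numbers are entirely made up; the point is that expected shortfall sees the tail that the 99% VaR barely registers.

```python
import random

random.seed(42)
# Simulated daily P&L: mostly small gains, with occasional injected large
# losses that a percentile measure can largely miss
pnl = [random.gauss(0.5, 1.0) for _ in range(10000)]
for i in range(0, 10000, 500):          # 20 rare tail losses
    pnl[i] = -random.uniform(20, 50)

losses = sorted(-p for p in pnl)        # losses as positive numbers, ascending
k = int(0.99 * len(losses))
var_99 = losses[k]                           # 99% VaR: the 99th percentile loss
es_99 = sum(losses[k:]) / len(losses[k:])    # expected shortfall: average loss beyond VaR

print(var_99, es_99)  # ES comes out far larger than VaR on this book
```

The trader's incentive problem falls straight out of this: piling on more of the rare-loss trades barely moves `var_99` while it steadily worsens `es_99`.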

Sunday 7 March 2010

Systemic Risk - Only Connect

Systemic Risk always reminds me of one of those Rube Goldberg inventions. Take a look at this video and see what i mean...
http://www.youtube.com/watch?v=qybUFnY7Y8w&feature=youtu.be&a

On Basel III and the difficulty of self regulation

With the so-called Basel III regulations, the G20 central banks are looking to upgrade the Basel II accord in the light of the crisis of the last two years. One of the biggest outstanding issues is the risk weighting assigned to sovereign debt, which remains more or less as before. The risk weights for sovereign debt denominated in foreign currency are based on the sovereign credit rating: AAA to AA- (0 per cent risk weight), A+ to A- (20 per cent), BBB+ to BBB- (50 per cent) and BB+ to B- (100 per cent).


Now given that

1) credit ratings are no longer the touchstone of fiscal probity they once were and

2) the central banks are themselves looking to raise capital in the global markets, so lower risk weights make that debt more attractive, and

3) it looks like the world will soon be faced with an impending sovereign debt crisis as government deficits become less and less sustainable.

Does it really make sense that Basel III facilitates government funding quite so (L)iberally?
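The table above, turned into a lookup. This is a sketch of the standardized approach: the exact rating notches and the 150% bucket for sub-B- sovereigns are filled in from the standard Basel II bands rather than quoted from the consultation.

```python
# Standardized risk weights for foreign-currency sovereign debt (sketch)
SOVEREIGN_RISK_WEIGHTS = [
    (("AAA", "AA+", "AA", "AA-"), 0.00),
    (("A+", "A", "A-"), 0.20),
    (("BBB+", "BBB", "BBB-"), 0.50),
    (("BB+", "BB", "BB-", "B+", "B", "B-"), 1.00),
]

def sovereign_risk_weight(rating):
    for ratings, weight in SOVEREIGN_RISK_WEIGHTS:
        if rating in ratings:
            return weight
    return 1.50  # below B-: 150% under the standardized approach

def risk_weighted_assets(exposure, rating):
    """RWA drives the capital charge: capital = 8% of RWA."""
    return exposure * sovereign_risk_weight(rating)

print(risk_weighted_assets(100e6, "AAA"))   # 0.0: AAA sovereigns need no capital at all
print(risk_weighted_assets(100e6, "BBB-"))  # 50m of RWA against the same exposure
```

The zero-weight top bucket is exactly the subsidy the post is complaining about: a bank can load up on highly rated sovereign debt at no capital cost, however doubtful the rating.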

Wednesday 3 March 2010

Volatility Time and Risk Clock Speed

I have always felt that one of the best ways to manage personal risks is simply to sample less frequently. That’s why I read just one news journal, once a week (the Economist, btw!). Risk managers need to understand what they are trying to manage, and it seems to me that this has multiple levels (like Kondratiev cycles). First, technology, demographics and geopolitics all change slowly, over years. Then real economics - business cycles, demand, supply - change over months or quarters. Market prices change almost instantaneously. When a risk manager manages in response to price changes, is he concerned with the price change in itself, or as a reflection of some more fundamental change? My concern with the concept of volatility time is that most of the moves in the market are merely noise, carrying no real additional information. If our need is to respond to market changes in themselves, then volatility time makes some sense. However, if our focus is the information provided by those market moves, then faster sampling really adds no value, particularly given the time and cost of processing the information to make an informed decision. I am sure there’s a great paper in this somewhere!
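One way to see why faster sampling adds little information about fundamentals: the estimate of a price process's drift depends only on the total elapsed time, not on how finely you sample in between. A toy simulation (the drift and volatility parameters are assumed, not estimated from anything):

```python
import random
import statistics

random.seed(1)
mu, sigma = 0.08, 0.20            # assumed annual drift and volatility
n_years, steps_per_year = 10, 252

# One ten-year path of daily returns
rets = [random.gauss(mu / steps_per_year, sigma / steps_per_year ** 0.5)
        for _ in range(n_years * steps_per_year)]

def annualized_drift(returns, periods_per_year):
    return statistics.mean(returns) * periods_per_year

daily_drift = annualized_drift(rets, 252)

# Resample the same path monthly (21-day buckets, 12 per year)
monthly = [sum(rets[i:i + 21]) for i in range(0, len(rets), 21)]
monthly_drift = annualized_drift(monthly, 12)

print(daily_drift, monthly_drift)  # identical: 252 samples a year bought nothing
```

The daily and monthly drift estimates agree to rounding error because both reduce to the total return divided by ten years; only the volatility estimate (the noise) gets sharper with more samples.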

Monday 1 March 2010

The Limits of Continuous Finance, Network Risk and the Prisoners’ Dilemma

Modern notions of market risk are based on stochastic calculus, which is essentially atheoretical about the nature of the shocks that cause changes in asset prices. For example, the whole apparatus of derivatives pricing using techniques such as Ito’s Lemma rests on essentially continuous (or discretely random) changes in asset prices. Enter the crisis of 2008/2009. The limitations of such models became clear as the structural relationships between banks came to the fore in deciding market trading. We are Bank A - do we trade with Bank B? We may have exposure to subprime assets through CDOs and other securitized assets; we don’t know how big that exposure is. Bank B may have exposure to subprime assets. B may have exposures to Banks C and D… None of them knows what their exposure is. If we transact with Bank B, we take on counterparty risk, which may be much increased by this unknown exposure (credit ratings don’t help much here). Better to take government money and not take that exposure. Hence interbank funding dries up, and credit spreads rise dramatically. Transacting or partnering with other members of your network simply becomes too expensive and too risky. This is much like the classic Prisoners’ Dilemma, where risk aversion and ignorance of others’ actions produce a suboptimal outcome.
What was the consequence of this dilemma? High counterparty exposures allow defaults to propagate through the network of counterparties one by one, the effects growing as too big/interconnected to fail (TBTF) institutions’ failures are amplified into a failure of the network as a system. Not surprisingly, the regulatory response is one of “macroprudential and systemic risk management”, and of course calls for the break-up of institutions whose individual failure could cause system-wide problems (e.g., the Volcker Rule). But let’s go back: it follows that truly understanding systemic effects requires transparency of the underlying exposures (much like letting the prisoners in the Prisoners’ Dilemma communicate). This is one reason (another is reduced settlement risk) for the rise of central counterparty clearing (CCPs), where one entity has transparency into the network and is able to intervene - using margin requirements or capital injections - when the network looks vulnerable. Suddenly risk management for CCPs becomes critical to managing systemic risk for the entire network.
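The interbank standoff can be sketched as a one-shot game with entirely hypothetical payoffs: each bank chooses to lend into the interbank market or hoard liquidity. Whatever the other bank does, hoarding is the better reply, so the market freezes even though mutual lending beats mutual hoarding for both.

```python
# Hypothetical payoffs to (Bank A, Bank B); each picks "lend" or "hoard"
LEND, HOARD = "lend", "hoard"
payoffs = {
    (LEND, LEND):   (3, 3),   # both lend: funding markets work, both profit
    (LEND, HOARD):  (-2, 4),  # A lends to a bank that may be hiding losses
    (HOARD, LEND):  (4, -2),
    (HOARD, HOARD): (1, 1),   # both hoard: the interbank market freezes
}

def best_response(options, opponent_choice, my_index):
    """My payoff-maximizing choice given what the other bank does."""
    return max(options,
               key=lambda c: payoffs[(c, opponent_choice) if my_index == 0
                                     else (opponent_choice, c)][my_index])

# Against a lender AND against a hoarder, A's best response is to hoard;
# by symmetry so is B's, so (hoard, hoard) is the equilibrium outcome
print(best_response([LEND, HOARD], LEND, 0))   # hoard
print(best_response([LEND, HOARD], HOARD, 0))  # hoard
```

Transparency (or a central counterparty standing between the banks) changes the payoffs of the game, which is exactly the argument the post goes on to make.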

Sunday 21 February 2010

The Tools of Operational Risk Management

For most risk managers, Operational Risk means just five things.
  1. KRI
  2. Losses
  3. RCSA
  4. Incident Management
  5. Regulatory Capital
First, and possibly most usefully, it means Key Risk Indicators (KRIs), quantitative metrics that capture variance in Key Performance Indicators (KPIs). These might be simple counts of incidents over a time period, like the number of payment exceptions over the month, or perhaps the number of overtime days. Second, operational risk means loss capture: how much money, including opportunity costs, have we lost over the last period. Third, Risk Control Self Assessment (RCSA) allows multiple organizational actors to estimate qualitatively the risks associated with the businesses, processes and incidents in their domain. Fourth, incident management is the most underused of the techniques: it focuses on how specific problems can be handled adroitly and efficiently, and its priority is internal training and facilitating workflow and efficient processes. Last, and probably least useful, is the move towards capital calculation from an operational risk loss distribution, either simplistically through rough approximations like the Basic Indicator Approach and the Standardized Approach, or more ambitiously (but probably no more accurately) with the Advanced Measurement Approach, which estimates capital by building a loss distribution model from loss magnitude and loss frequency distributions.
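A minimal sketch of the loss distribution approach behind the AMA: simulate a Poisson number of loss events per year, draw a lognormal severity for each, and read capital off the 99.9% quantile of the simulated annual loss distribution. All parameters here are illustrative, not calibrated to any loss data.

```python
import math
import random

random.seed(7)

def poisson(lam):
    """Knuth's algorithm for Poisson draws -- fine for small lambda."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_annual_loss(freq_lambda=25, sev_mu=8.0, sev_sigma=2.0):
    """One simulated year: a Poisson count of loss events, each with a
    lognormal severity. Frequency and severity parameters are assumed."""
    n_events = poisson(freq_lambda)
    return sum(random.lognormvariate(sev_mu, sev_sigma) for _ in range(n_events))

annual_losses = sorted(simulate_annual_loss() for _ in range(20000))
capital = annual_losses[int(0.999 * len(annual_losses))]  # 99.9% quantile
print(f"operational risk capital (99.9% quantile): {capital:,.0f}")
```

The fragility of the approach is visible in the code: the capital number is driven almost entirely by the assumed severity tail (`sev_sigma`), the parameter we know least about.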

Moral Hazard and Basel II

One of the fundamental precepts of Basel II was simple and yet inspired. It argued that as banks adopt more sophisticated (and probably superior) approaches to measuring risk, their regulatory capital charges should decrease. So for market risk, capital charges for standardized approaches were greater than those for internal models approaches. For credit risk, internal ratings based approaches required less capital than standardized approaches, and for operational risk, the AMA and standardized approaches require less capital than the Basic Indicator Approach. But enter crisis-driven regulatory changes. Suddenly capital requirements for market risk in the trading book under internal models approaches are likely to more than triple, as modifications to the basic IMA approach - stressed VaR, the removal of Tier 3 capital, and the potential introduction of Incremental Risk Capital - dramatically increase the capital required. This breaks the fundamental incentive for institutions to move towards more sophisticated (presumably better) risk models. And of course there's the rub: the regulators no longer trust the models. After the crisis comes the reaction against the quantification of risk and the mathematical finance that underpins it.

Saturday 20 February 2010

VaR and the significance of Model Risk

More than a decade ago I wrote a paper that compared estimates of VaR from different systems, different methodologies and different data sets. Not surprisingly, perhaps, there were huge differences in the outputs. Why should this be? After all, these models are trying to estimate the same thing for a given portfolio. The variance in VaR estimates is actually a form of model risk, caused by alternative systems, models, data sets, even different users. Unlike in most other decision support systems (of which VaR is an example), the variance in VaR estimates can be "added" to the actual VaR estimate to produce a VaR that incorporates not just market volatility but also model risk. Take a look at the paper - it can be downloaded from:
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1212&rec=1&srcabs=148750
The main points still apply today, I believe, and slowly we are starting to understand the role of people and systems in interpreting risk measures. Measures are rife with uncertainty, ambiguity and even equivocality that needs to be appreciated even if it cannot be completely understood.
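The point is easy to reproduce: run two standard VaR methodologies over the same return series and they disagree, and the gap between them is precisely the model risk in question. A toy comparison on made-up fat-tailed returns:

```python
import random
import statistics

random.seed(3)
# Fat-tailed daily returns via a normal mixture: mostly calm, occasionally wild
returns = [random.gauss(0, 0.01 if random.random() < 0.95 else 0.04)
           for _ in range(1000)]

# Method 1: historical simulation -- the empirical 1% worst loss
losses = sorted(-r for r in returns)
hist_var = losses[int(0.99 * len(losses))]

# Method 2: parametric (variance-covariance) -- assume normality,
# 2.326 being the one-sided 99% normal quantile
parametric_var = 2.326 * statistics.pstdev(returns)

print(hist_var, parametric_var)  # same portfolio, same data, two different VaRs
```

Neither number is "the" VaR; the spread between them is an estimate of the model risk that the paper argues should be added back into the reported figure.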

Buy Side vs Sell Side Market Risk Management: What is the difference?

Buy-side institutions such as pension funds, asset managers and hedge funds have a very different perspective on market risk management. Unlike the sell side, such as banks, the priority of the buy side is performance against some benchmark, through which they can determine the extent to which they add value via asset or security selection. This is what we term the alpha of the portfolio, and estimating alpha is at the heart of evaluating an asset manager’s performance. It can be contrasted with beta - the exposure to systematic risks driven by market factors. Asset managers on the buy side use VaR-like measures just like their colleagues on the sell side; the difference is that they focus on relative measures of risk, so-called TaR (tracking error at risk) measures, which capture the variation of returns relative to some predefined benchmark. The sell side typically focuses on absolute return volatility (traditional VaR measures). Also, depending on the type of asset manager (e.g., alternative investors), the buy side often has longer investment horizons and is more concerned with liquidity risk issues.
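The distinction in miniature (the return series below are invented for illustration): the sell-side measure works off the volatility of the fund's returns themselves, while the buy-side measure works off the volatility of the *active* returns, fund minus benchmark.

```python
import statistics

# Illustrative monthly returns for a fund and its benchmark
fund      = [0.021, -0.013, 0.034, 0.008, -0.026, 0.017, 0.011, -0.004]
benchmark = [0.018, -0.010, 0.030, 0.010, -0.020, 0.015, 0.012, -0.002]

# Sell-side style: absolute return volatility (feeds a traditional VaR)
absolute_vol = statistics.pstdev(fund)

# Buy-side style: tracking error -- the volatility of returns *relative*
# to the benchmark (feeds a tracking-error-at-risk, or TaR, measure)
active = [f - b for f, b in zip(fund, benchmark)]
tracking_error = statistics.pstdev(active)

print(absolute_vol, tracking_error)  # TE is much smaller: the fund hugs its benchmark
```

A fund can look very risky on absolute volatility and almost riskless on tracking error at the same time, which is why the two sides of the street can reach opposite conclusions about the same portfolio.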

Central Counterparty Risk Management - No Silver Bullet

The credit crisis has pushed OTC derivatives and securities financing across the world towards a central counterparty (CCP) approach, where a single counterparty assumes the settlement and pre-settlement risks of its members. Recent changes to the Basel II accord, namely capital reductions for transactions with CCPs, are fueling this trend. But beware – there is a catch. Moving to a CCP only makes sense if the CCP has a higher credit rating and superior risk management capabilities compared to the counterparties it is servicing.

But what does it mean for a CCP to perform effective risk management? I think it boils down to a few basic things that the CCP, say an exchange, must get right.



These summarize many of the recommendations of the BIS/IOSCO committee back in 2004. It starts with ensuring that clearing members have adequate financial resources to back up the CCP in the event of crisis: creditworthiness criteria for new members are crucial, as is the ability of the exchange to monitor clearing member positions and prices in real time. The primary tools the exchange uses are margins - initial margins and variation margins. These function on the principle that the defaulter pays, by providing margin periodically, usually based on some measure of the pre-settlement risk of the exposure; this might be a VaR calculation over the settlement period. Variation margins are typically adjusted dynamically to capture changing market conditions and positions. Daily margin adjustments to a large extent prevent an individual clearing member’s default having knock-on effects on the other exchange members. Back testing can then be used to check just how effective the margin measures are in capturing actual member losses over time. Stress testing can determine the default fund - essentially a capital fund used to protect the exchange against extreme shocks beyond the margins - for example by looking at the impact of the two largest members defaulting simultaneously. Of course there are many other risks faced by a CCP: investment risk (on all those margin payments), custodian risk, operational risk, legal risk and so on.
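The margin-and-backtest loop described above can be sketched in a few lines, with margin set as a 99% VaR over an assumed two-day close-out horizon. All parameters (volatility, horizon, price) are illustrative, and real CCP margin models are considerably richer.

```python
import random

random.seed(11)

def initial_margin(position, price, daily_vol, horizon_days=2, k=2.33):
    """Margin as a one-sided 99% VaR over the assumed close-out horizon
    (k = 2.33 is the 99% normal quantile)."""
    return abs(position) * price * daily_vol * (horizon_days ** 0.5) * k

def backtest_margin_coverage(n_days=1000, daily_vol=0.015, price=100.0):
    """Coverage backtest: how often does a member's actual two-day loss
    exceed the margin held? It should land near the 1% the model promises."""
    margin = initial_margin(1, price, daily_vol)
    breaches = 0
    for _ in range(n_days):
        pnl = sum(random.gauss(0, daily_vol * price) for _ in range(2))
        if -pnl > margin:              # loss bigger than the margin posted
            breaches += 1
    return breaches / n_days

rate = backtest_margin_coverage()
print(rate)
```

The default fund then covers whatever this backtest says the margin misses, sized by stress scenarios (the two-largest-members test) rather than by the 99% quantile.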

The question it seems to me is whether there are scale economies in managing the counterparty credit risks (probably true), and whether existing CCPs are able to show competence in managing the residual risks (this is less clear, particularly in the light of potential systemic risk and the potential cost to taxpayers of having to pick up the pieces). What do others think?

Tuesday 16 February 2010

What do we know about Liquidity Risk?

Liquidity is often described as the new frontier in academic research on financial markets. From a practitioner perspective, financial crises such as the 1997 Asian crisis and the 2007/2008 global financial crisis, have reminded market participants of the importance of taking into account liquidity as a factor when evaluating investment opportunities and when designing risk management systems. But how much do we really know about Liquidity Risk? How do we measure it? How do we manage it?

Liquidity risk comes in two species: funding risk and market liquidity risk. Each operates at a different level of analysis. The first is liquidity risk at the level of the institution: how do we ensure that an organization (like a bank) can survive the variable inflows and outflows of cash over a particular time period? Such funding risk is really an outgrowth of ALM, and seeks to design the balance sheet to be robust in the face of sudden outflows of cash. Techniques like cash flow gaps, cash flow forecasting, liquidity reserves, contingency planning and crisis management are all key components of managing funding risk.
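The cash flow gap, the workhorse of funding risk, is simple enough to sketch directly. The bucket structure and the numbers below are invented for illustration; a negative cumulative gap flags a horizon at which the bank needs reserves or contingency funding.

```python
# Illustrative cash-flow ladder: expected inflows and outflows per time bucket
buckets  = ["overnight", "1 week", "1 month", "3 months", "1 year"]
inflows  = [ 50,  80, 200, 400, 900]   # in millions, made-up numbers
outflows = [120, 150, 180, 350, 800]

def cumulative_funding_gaps(inflows, outflows):
    """Per-bucket and cumulative gaps across the maturity ladder."""
    gaps, cum, running = [], [], 0
    for i, o in zip(inflows, outflows):
        running += i - o
        gaps.append(i - o)
        cum.append(running)
    return gaps, cum

gaps, cum = cumulative_funding_gaps(inflows, outflows)
for b, g, c in zip(buckets, gaps, cum):
    print(f"{b:>10}: gap {g:+5d}, cumulative {c:+5d}")
```

Here the cumulative gap bottoms out at the one-week bucket, so that is where the liquidity reserve or contingency line has to be sized, even though the full-year picture looks healthy.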

Market liquidity risk operates at the level of individual assets, and measures our inability to convert assets into cash in a reasonable timeframe. There are many ways to model market liquidity. Some extend the traditional VaR approach into a liquidity VaR model based on the bid-ask spread, using either an empirical or a theoretical distribution to derive an add-on to the VaR that incorporates the potential change in spread. Others use regression models relating historical price changes to volume changes (after adjusting for the usual Fama-French beta factors). A third approach looks at limit order books and tries to infer the liquidity embedded in the reserve prices of off-market orders. A final stream of research looks at the optimal trading strategies associated with illiquid assets: in the presence of illiquidity and other transaction costs, how do I wind down my position in illiquid assets within a certain time period?
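The spread-based add-on mentioned first can be sketched in a few lines, in the spirit of the Bangia et al. liquidity-adjusted VaR: half the worst-case relative spread, applied to the position, on top of the ordinary VaR. All the numbers below are illustrative.

```python
def liquidity_adjusted_var(position_value, var, mean_spread, spread_vol, k=2.33):
    """Standard VaR plus an exogenous liquidity cost: half of the
    worst-case relative bid-ask spread (mean + k sigma) on the position."""
    liquidity_cost = 0.5 * (mean_spread + k * spread_vol) * position_value
    return var + liquidity_cost

# A 10m position with a 1-day 99% VaR of 250k, trading on an average
# 1% relative spread whose volatility is 0.5%
lvar = liquidity_adjusted_var(10e6, 250e3, 0.01, 0.005)
print(lvar)  # roughly 358k: the spread add-on is over 40% of the market VaR
```

The half factor reflects crossing only one side of the quoted spread on liquidation; the `k * spread_vol` term is what distinguishes stressed liquidity from normal trading costs.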

Although strictly speaking not liquidity risk per se, another stream of research is hard at work extending asset pricing models (e.g., the CAPM) to incorporate systematic illiquidity as a source of asset returns. It typically argues that what is often viewed as a source of alpha is really poorly measured and understood beta, in the form of an illiquidity premium.

On the Value of Historians as Risk Managers?

One of the casualties of the recent crash has been the decline in the credibility of the “quant”. This may be no bad thing. Although I certainly do not advocate giving up on quantitative methods - which are here to stay in a world of derivatives, electronic trading and algorithmic trading - I do believe that other disciplines have much to add to the analysis and management of risk. For example, historians. I think of some of the recent work on the impacts of historical banking crises, e.g., much of the work done by Kenneth Rogoff. Historians also offer a natural antidote to the abstractions of modern risk management. History is about specific outcomes, not abstract concepts or distributions. History is messy and full of unintended consequences. It is about case studies rather than theory. It is accessible to managers, who can project themselves into the characters of history. Good managers add value partly because (like grandparents) they have seen tail events before, and know where the nearest bolthole or watering hole might be found. It’s interesting to note that historians rarely think about distributions or theoretical models when hesitantly extrapolating into the future (if they dare at all). If they do, they don’t base their assessments on recent events but on long-term cultural, demographic and geopolitical forces - forces which might not move this market one way or another, but which do fundamentally shift the actors (usually governments, and to a lesser extent corporations and markets) in one direction based on assessments of their own best interests. Maybe we should have corporate historians as non-executive directors!

Time and Space... And Risk

Have you ever thought that risk management always operates in a context, and that context is determined by our sense of space and our view of time? Consider… Space is a buffer that gives us freedom from worrying about risk. That space might be geographical - think of the luxury the US enjoys in having two large oceans between it and any potential enemy - or it might be any buffer that can mitigate the effect of shocks. Consider inventory, and how it shields a corporate from shocks in supply or in demand. It may be any shared space of common beliefs that facilitates action: space might be an ideology, like a religion or like capitalism. A market is an example of a shared space that limits the things we need to worry about (the risks) precisely because it defines the rules that determine how we interact.

Time is more subtle from a risk perspective. We infer risks from the past and extrapolate into the future. We see actual outcomes historically and think we can infer future distributions of potential events over a particular future time horizon (this is a so called frequentist view of statistics). At the very least we look at the past and infer priorities for future action.

It’s interesting, I think, to realize that we don’t have just one horizon in space or in time. For example, we have short-term horizons for monetary policy and long-term horizons for fiscal policy. We have one-day horizons for liquid assets and one-year horizons for changing our suppliers or our technologies. Governments operate in the space of nation states, but they struggle just as much with markets and with ideologies.

Risk is the unexpected, and the unexpected breeds where differences in perspectives on time and space abound.

Monday 15 February 2010

Greece and other PIGS

The problem of Greece is really a classical financial crisis. Boom times and cheap money from abroad encouraged uncontrolled lending in the economy. And of course the other side of those foreign capital inflows is a huge budget deficit, which will make it difficult to adopt a Keynesian tax-and-spend approach to getting out of the crisis. Nor can Greece apply much of a monetary solution to its problems, being part of the Euro zone. Budget cuts, higher interest rates and greater unemployment seem inevitable, if politically unacceptable, and of course a political reaction might lead Greece out of the Euro zone altogether, putting in jeopardy the whole European expansion project as the contagion passes to the other PIGS (Portugal, Italy, Greece and Spain). Where to then, I wonder? Of course the markets are already forecasting these changes - shorting the euro, jacking up long-term interest rates and credit spreads - with the sad effect of making inevitable the very crisis that they predict.

The Economist's View

The Economist presents a smorgasbord of reasons for the recent financial crisis without firmly settling on any one as crucial. Still it's a good overview of the themes and some of the limitations of quantitative risk management.

http://www.economist.com/specialreports/displaystory.cfm?story_id=15474137

While we're at it, they have a good video on the dangers of fat tails and stress correlations...http://ow.ly/16Gel

My feeling is that although the limitations of risk measurement contributed to making a bad situation worse, particularly by giving a false sense of security to people who should have known better, it would be unfair to throw the baby out with the bathwater. Models are like walking sticks, they help you get around, but they don't do the walking for you.

Piano Tuners?

OK - courtesy of Wikipedia, here goes one answer:
1.There are approximately 5,000,000 people living in Chicago.
2.On average, there are two persons in each household in Chicago.
3.Roughly one household in twenty has a piano that is tuned regularly.
4.Pianos that are tuned regularly are tuned on average about once per year.
5.It takes a piano tuner about two hours to tune a piano, including travel time.
6.Each piano tuner works eight hours in a day, five days in a week, and 50 weeks in a year.
From these assumptions we can compute that the number of piano tunings in a single year in Chicago is

(5,000,000 persons in Chicago) / (2 persons/household) × (1 piano/20 households) × (1 piano tuning per piano per year) = 125,000 piano tunings per year in Chicago.
We can similarly calculate that the average piano tuner performs

(50 weeks/year)×(5 days/week)×(8 hours/day)/(1 piano tuning per 2 hours per piano tuner) = 1000 piano tunings per year per piano tuner.
Dividing gives

(125,000 piano tuning per year in Chicago) / (1000 piano tunings per year per piano tuner) = 125 piano tuners in Chicago.
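The same estimate written out as code, which makes the Fermi point nicely: every line is an explicit, challengeable assumption, and the answer is only as good as the worst of them.

```python
# Each input below restates one of the numbered assumptions above
population = 5_000_000            # people living in Chicago
persons_per_household = 2
households_per_piano = 20         # one household in twenty has a tuned piano
tunings_per_piano_per_year = 1
hours_per_tuning = 2              # including travel time
working_hours_per_year = 8 * 5 * 50   # 8h/day, 5 days/week, 50 weeks/year

tunings_demanded = (population / persons_per_household
                    / households_per_piano * tunings_per_piano_per_year)
tunings_per_tuner = working_hours_per_year / hours_per_tuning

estimated_tuners = tunings_demanded / tunings_per_tuner
print(estimated_tuners)  # 125.0
```

Halve or double any single assumption and the answer moves by the same factor, which is exactly the sensitivity a back-of-the-envelope risk guesstimate should expose.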

How many piano tuners in Chicago and other Fermi problems?

What's that got to do with risk management? And what's a Fermi problem? According to Wikipedia, a Fermi problem is an "estimation problem designed to teach dimensional analysis, approximation, and the importance of clearly identifying one's assumptions". Named after the physicist Enrico Fermi, such problems typically involve making justified guesses about quantities that seem impossible to compute given the limited information available. Long ago, when I went to an interview at Cambridge, the professor posed one such Fermi problem to me: how many piano tuners are there in Chicago? He did not care about the answer; he cared about the logic of the process by which I produced it. The same is true of risk measurement. Should we be more worried about the collapse of the EU or about an assassinated president? A 200 basis point move in interest rates or a drought in the midwest? One of the themes of this blog will be the need to use heuristics to guesstimate results rather than complex models.

And by the way, the answer is: there are 1,523 piano tuners. Just kidding - I made it up!

Welcome...

Welcome to my new blog. Risk management, it seems to me, like much of modern society, has become siloed: the domain of specialists focusing on one or another narrow technical issue, to the exclusion of the more potentially devastating risks that could derail our world, our economies, our businesses and our lives.
Enjoy!