The practice of statistical analysis sure has changed in the 30 years since I was in grad school. At that time, the field was deductive and mathematical, driven by probability theory. I often struggled with the theory, frustrated by the arduous mathematical proofs. We used computers to get answers, but only after we'd resolved the math. Now computing is front and center in the statistical world. Inductive simulation and Monte Carlo techniques driven by computer power share the statistical stage with probability theory. Indeed, computational statistics is now an important area of statistical inquiry. And statistics is converging with the machine learning focus of computer science to produce exciting developments in data mining, or knowledge discovery in databases. Statisticians provide the theory, models, and data analysis expertise; computer scientists, the new mining algorithms and techniques for handling large data sets. Done well, this marriage could be a bonanza for the high-end analytics branch of BI.

I'm a big advocate of the Monte Carlo (MC) approach for BI. Basically, MC simulation uses computer-generated pseudo-random numbers to induce approximate solutions to very complicated problems that might be impossible to derive mathematically. Statistical practitioners can now often finesse knotty calculations with virtually unlimited iterations of random numbers. Several years ago, I adopted the now popular MC-related *bootstrap* as a staple in my BI arsenal. The bootstrap is a data-driven simulation method for statistical inference popularized by Bradley Efron of Stanford. The technique takes its name from the phrase “to pull oneself up by one's bootstraps”, thought to have originated in *The Adventures of Baron Munchausen* by Rudolf Erich Raspe. To bootstrap has come to mean to make do or even thrive with the situation at hand – to effectively play the cards that were dealt. For statistical inference, those cards are the sample, when conclusions are desired about a population. If, for example, the only information one has to estimate the population mean is a sample of size 500, the bootstrap uses that sample as a proxy for the population, resamples with replacement from it many times – as if it were the population – and computes the mean of each resample. If the sample is a good representation of the population, the distribution of a large number of resampled means can be used to make inferences about the population. And of course, this resampling exploits the random number generation and simulation capabilities of modern statistical software like open source R (http://www.r-project.org/).
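The mean-estimation scenario above can be sketched in a few lines. My own work was done in R; here is a minimal Python/NumPy illustration of the same idea, with a synthetic 500-observation sample standing in for real data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sample" standing in for the 500 observations in the text.
sample = rng.normal(loc=5.0, scale=2.0, size=500)

# Resample with replacement from the sample many times, treating it as
# a proxy for the population, and record the mean of each resample.
n_boot = 10_000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])

# A 95% bootstrap percentile interval for the population mean.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

The interval `(lo, hi)` is the percentile-method confidence interval: the middle 95% of the resampled means.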

I used the bootstrap method to examine portfolio returns from the same French data (http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html) reported in the 11/16/2007 column *Statistical Graphs for Portfolio Returns*. I first assembled data for 8 portfolios: the risk free asset and 7 stock portfolios (Small Growth, Small Neutral, Small Value, Large Growth, Large Neutral, Large Value, Market). For each portfolio, there are 11,138 daily percentage change figures (roughly 44 years of about 252 trading days per year, from 7/1963 through 9/2007). From each stock portfolio, I formulated two additional portfolios: the first, a combination of 75% of that stock portfolio and 25% of the risk free asset; the second, 25% of the stock portfolio and 75% of the risk free asset. These blended portfolios probably better approximate how investors actually behave in practice – combining risky stocks with more placid bonds and money market funds. One would expect the portfolios that combine stock and the risk free asset to be less “volatile” than stock alone, especially in the short run.
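The blending itself is just a fixed-weight average of the daily percentage changes. A small sketch, using synthetic series in place of the actual French data (the numbers below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily percentage changes standing in for the French data:
# a volatile stock series and a placid risk free series (hypothetical values).
stock = rng.normal(0.05, 1.0, size=11_138)   # stock portfolio, % per day
risk_free = np.full(11_138, 0.02)            # risk free asset, % per day

# Each computed portfolio is a fixed-weight daily mix of the two series.
blend_75_25 = 0.75 * stock + 0.25 * risk_free   # 75% stock / 25% risk free
blend_25_75 = 0.25 * stock + 0.75 * risk_free   # 25% stock / 75% risk free
```

As expected, the heavier the risk free weight, the lower the day-to-day volatility of the blended series.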

With 44 years of daily data for 8 actual and 14 computed portfolios, I was ready to try the bootstrapping technique. Statisticians generally settle on 1,000 or even 10,000 iterations of resampling for bootstrap calculations. I let my notebook run through the night in the R statistical package, resampling each portfolio's returns 100,000 times for horizons of 1, 3, 5, 10, 15, 20, 30, and 40 years. The longer return periods are obviously more compute-intensive, since the resamples must include an estimate for each day. For the 10 year returns, for example, I sampled with replacement 252*10 of the 11,138 percentage change figures, 100,000 times for each portfolio. Fortunately, R's architecture is optimized for array processing. Sensitive to storage requirements, I then computed the resampled growth of $1 returns, consolidating the 100,000 estimates into a percentile distribution for each portfolio and time period. Once I had amassed all the bootstrapped calculations, I used R's programming features and lattice graphics functionality to display the results.
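The core loop – resample a multi-year run of daily changes, compound them into the growth of $1, and summarize the iterations as percentiles – can be sketched as follows. Again this is an illustrative Python version with synthetic data, not the overnight R run; the iteration count is trimmed for speed:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily percentage changes as a stand-in for one portfolio's
# 11,138 observations (the real analysis used the French data in R).
daily_pct = rng.normal(0.04, 1.0, size=11_138)

days_per_year, years = 252, 10
n_boot = 1_000        # the column used 100,000; trimmed here for speed

# For each iteration, resample 252*years daily changes with replacement
# and compound them into the growth of an initial $1.
growth = np.empty(n_boot)
for i in range(n_boot):
    draws = rng.choice(daily_pct, size=days_per_year * years, replace=True)
    growth[i] = np.prod(1.0 + draws / 100.0)

# Consolidate the iterations into a percentile distribution, as plotted.
pctiles = np.percentile(growth, [2.5, 50, 97.5])
```

Storing only the percentile summary, rather than all 100,000 raw growth figures per portfolio and horizon, is what keeps the storage requirements manageable.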

I include the 1, 5, and 20 year work-in-progress statistical graphs to demonstrate the results of the analyses. Below are the 1 year returns. Each panel of the overall graph, ordered by performance from left to right, details the resampled returns of one or more portfolios. The panels for the stock portfolios show results for pure stock (blue), 75% stock and 25% risk free (brown), and 25% stock and 75% risk free (green). The x axis is the consolidated percentile distribution, ranging from 0 to 100. The y axis depicts the growth of an initial $1 invested, using an exponential rather than an arithmetic scale to handle the multiplicative nature of over-time returns. The intersection thus shows the percentile of a particular return, interpreted as the percentage of cases where returns were *less than or equal* to the one noted. The gray vertical lines sit at percentiles 2.5, 50, and 97.5, so that 95% of simulated returns fall between the anchor bars of each panel – an area of critical importance for statistical interpretation. The horizontal red line indicates the break-even point of an initial $1 investment; the gray horizontal bar depicts the best risk free return – and “sets the bar” for evaluating portfolio performance.

*Click on image for full size version.*

As we'd anticipate, all risk free returns are positive, with the 100th percentile intersecting the “to beat” bar for stock portfolios. It appears from the Small Growth panel that about 65% of pure stock portfolios are in the black, but only 55% or so beat the risk free standard. On the other hand, the Small Growth 75% bond portfolio seldom loses money, but trails on the upside – as might be expected. The 25% bond portfolio curve fits between those extremes. With a one year investment horizon producing a one-in-three chance of losing capital with a pure stock portfolio, I think I'd pass on Small Growth. At the other end, pure Small Value is in the black more than 80% of the time, beating the risk free target in excess of 70% of the time. The 75% bond Small Value portfolio appears to have positive returns in most cases, beating the risk free standard over 85% of the time. A much better risk, it would seem, for a short term investor.

The 5 year return graph shows a more typical investment horizon. Note that the pure stock Small Growth portfolio still has a reasonable chance of being in the red, but the 75% bond variant is safe. All three Small Growth portfolios appear to beat the risk free standard in about two-thirds of the cases. Small Value is starting to differentiate at 5 years, with all 3 portfolios besting the risk free return over 90% of the time. Some returns are extreme (outside the 95% range): there's a lowest percentile for Small Growth at about $0.35, and a highest for Small Neutral of over $8.00!

*Click on image for full size version.*

With a 20 year horizon, there are provocative differences in returns by portfolio style. While the risk of losing capital is negligible, Small Growth portfolios still lag the risk free standard 25% of the time. The 20 year median growth of $1 for a pure Small Growth portfolio is about $6. Small Value, by contrast, exceeds the risk free standard over 95% of the time, generally by a significant amount. The median growth of $1 for pure Small Value is over $20. Little wonder we need an exponential returns scale!

*Click on image for full size version.*

Finally, I show a dot plot that summarizes information over time for the 7 pure stock portfolios. The red dot denotes the percentage of resamples in which the given portfolio is in the black for the time period. The green dot shows the percentage in which the portfolio bests the risk free standard. Small Value, Small Neutral, and Large Value appear to be good bets by these criteria.
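The two dot-plot statistics are just tail proportions of the bootstrapped growth distribution. A minimal sketch, assuming hypothetical growth-of-$1 results for one portfolio and a deterministic risk free benchmark (both invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical bootstrapped growth-of-$1 results for one stock portfolio,
# and an assumed risk free growth of $1 over the same horizon.
stock_growth = rng.lognormal(mean=0.5, sigma=0.6, size=10_000)
risk_free_growth = 1.8

# Red dot: fraction of resamples in the black (growth above $1).
in_the_black = np.mean(stock_growth > 1.0)

# Green dot: fraction of resamples beating the risk free standard.
beats_risk_free = np.mean(stock_growth > risk_free_growth)
```

Since beating the risk free standard implies being in the black, the green dot can never sit above the red one.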

*Click on image for full size version.*

I'm not sure how Professor Efron would react to the use of the bootstrap for evaluating portfolio returns. But simulation is the basis for many of the retirement calculators offered on the Web by financial services providers. When those calculators report that a couple with a certain age, financial needs, spending patterns, and existing portfolio allocation have an 80% chance of living comfortably till the end, they are essentially noting the percentile results of resamplings like those reported here. Of course, their scenarios are more complex, accounting for spending, portfolio expenses, inflation, etc.

As noted in *Statistical Graphs for Portfolio Returns*, there are several serious reservations to be acknowledged with this type of analysis. The first and most important is that the sample of returns used for the bootstrap might be a bad proxy for the population. Financial planners continually chide that past results do not predict the future. Our methodology assumes that future returns will look like those of the last 44 years, though experts like Warren Buffett and John Bogle warn that investors should expect lower stock returns going forward. Investors would not see the wealth accumulations noted here under any circumstances, since the calculations don't account for the significant drain of expenses, often 1.5% or more per year. And investors need to be wary of judging the performance of the different portfolio market styles. A “Small Value” fund from Fund X might look very different from the version in this analysis. Finally, the performance of market styles runs in cycles. Large and growth dominated in the late '90s, with small and value on top from 2000-2005. It now seems that large and growth are again attractive in the market. An investor inclined towards small and value should therefore have a long-term horizon.

I think it's probably still safe, however, to note the salutary effects of time on wealth accumulation demonstrated in this analysis. Even with less distance between stock and risk free returns, the impact of appreciation, reinvested dividends, and compounding should make it highly probable that, over 15-20 year holding periods, stock portfolios will outperform the risk free asset. The closer one's investment horizon is to that time frame, the more the portfolio should be tilted towards stocks. For shorter time periods, a larger proportion of the risk free asset can help dampen the volatility of returns.

**References**

1. Bradley Efron and Robert J. Tibshirani. *An Introduction to the Bootstrap*. CRC Press, 1993.

2. David Ruppert. *Statistics and Finance: An Introduction*. Springer-Verlag, 2004.