By Barry Weinstein, Founder and CEO of VolatilityFX
Why, even in the information age, you cannot earn above-average risk-adjusted returns by speculating in the financial markets.
Financial markets are efficient without exception. There is no way for a speculator to generate supernormal profits without undertaking supernormal risk. Even at the highest levels of finance, traders endowed with decades of experience, millions of dollars of technological infrastructure, and supporting teams of quants and developers cannot generate above-average risk-adjusted returns, and oftentimes underperform passively managed tracker funds. Michael C. Jensen showed that the market’s apparent randomness, combined with unavoidable transaction costs, puts day traders at a structural disadvantage. Numerous studies comparing active and passive fund management have found that actively managed funds tend to underperform their benchmark indices by roughly the transaction costs accrued over the period (Jensen, 1968). Even so, in the information age, when methods such as machine learning, optimization, and artificial intelligence are widespread and accessible, can a new wave of “high-tech traders” finally beat the market? The answer is an emphatic NO!
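Jensen’s point about cost drag is a matter of simple compounding. The sketch below uses assumed, illustrative numbers (an 8% gross annual return for both funds, 0.1% annual costs for the tracker, 1.5% for the active fund); it is not drawn from any cited study, and only shows how a fund that matches the index gross still lags it net of costs.

```python
# Illustrative only: both funds earn the same 8% gross return; they differ
# only in the annual costs deducted from that return (assumed figures).
def compound(gross_return, annual_cost, years):
    wealth = 1.0
    for _ in range(years):
        wealth *= (1 + gross_return) * (1 - annual_cost)
    return wealth

passive = compound(0.08, 0.001, 20)   # low-cost tracker fund
active  = compound(0.08, 0.015, 20)   # same gross return, higher costs

print(f"passive: {passive:.2f}x  active: {active:.2f}x")
```

Over twenty years the cost gap alone leaves the hypothetical active fund well behind the tracker, with no difference in skill assumed.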
Market Efficiency: What does it mean?
The Efficient Market Hypothesis was formalized by Eugene Fama in a 1970 literature review of the empirical work on portfolio management. Its central conceptual innovation is that markets have no memory of past events and tend to follow Brownian motion. Brownian motion can be interpreted as a “drunkard’s walk,” in which the direction of an asset price’s next move is as predictable as a coin flip. The asset moves randomly up and down in small increments over a given time frame, and the result looks like unpredictable wandering without large gaps between steps.
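The drunkard’s walk can be sketched in a few lines. The tick size, step count, and starting price below are arbitrary illustrative choices; the only essential feature is that each step is an independent coin flip, so the path’s history says nothing about its next move.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# A "drunkard's walk": each period the price moves up or down one tick
# with equal probability, independently of every prior move.
def random_walk(start=100.0, tick=1.0, steps=250):
    path = [start]
    for _ in range(steps):
        step = tick if random.random() < 0.5 else -tick
        path.append(path[-1] + step)
    return path

path = random_walk()
print(path[-1])  # the final price cannot be forecast from the path's history
```

Because every increment is independent, no amount of study of the earlier steps improves a forecast of the next one, which is precisely the property Fama’s hypothesis attributes to asset prices.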
Fischer Black offered a second definition of an efficient market at the 1985 annual meeting of the American Finance Association, when he asserted, “We might define an efficient market as one in which price is within a factor of 2 of value; i.e., the price is more than half of value and less than twice value. By this definition, I think almost all markets are efficient almost all of the time.” Black added, “‘almost all’ means at least 90 percent.”
Princeton economist Burton Malkiel offered a third definition. The author of “A Random Walk Down Wall Street” defines an efficient market as “a market that does not allow investors to earn above-average returns without accepting above-average risks” (Malkiel, 2003).
There are clear problems with applying the concept of a drunkard’s walk, or a series of coin flips, to financial markets, since there are periods, such as the US equities market during the 1990s dot-com boom, when prices made stable positive gains for years on end nearly without disruption. Between 1995 and its peak in March 2000, the Nasdaq Composite index rose 400%, only to fall 78% from that peak by October 2002, erasing nearly all of the boom’s gains. Eugene Fama’s description of efficient markets does not sufficiently account for such disparate, stable behaviors across independent markets. Lo and MacKinlay used statistical techniques to show that the continuous successive upward and downward movements known as “trends” were too numerous and occurred too often to be the product of Brownian sequences (Lo and MacKinlay, 1999). Similarly, Lo, Mamaysky, and Wang demonstrated that certain technical-analysis patterns did have predictive power with respect to future securities prices (Lo, Mamaysky, and Wang, 2000). Lev and Thiagarajan showed that fundamental analysis can forecast future earnings reports and, from there, predict future asset prices with a fair degree of accuracy (Lev and Thiagarajan, 1993).
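The Nasdaq figures above are worth checking arithmetically: a 400% rise followed by a 78% fall does not return an index to its starting point, but it comes close. The base level of 100 below is a normalization, not an actual index level.

```python
base = 100.0                   # normalized starting level, 1995
peak = base * (1 + 4.00)       # +400% by March 2000
trough = peak * (1 - 0.78)     # -78% from the peak by October 2002

print(round(trough, 2))        # ~110: the bust undid nearly all of the gain
```

Five years of steady gains were reduced to roughly 10% above the starting level by a single two-and-a-half-year reversal, which is the asymmetry the next paragraph turns on.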
Black’s definition of an efficient market inherently relies on the concept of a fundamental, or objectively fair, price of an asset, to be determined or estimated by teams of accountants and analysts who form their own scientific estimates of value (Black, quoted in Mehrling, 2005). Black’s definition implies that the fundamental value of a security can be used (in rare circumstances) to exploit mispricings in the market, where valuations are deemed unreasonable by teams of analysts. What Black’s definition does not take into account is that predicting future asset prices with high probability does not necessarily translate into the ability to generate above-average risk-adjusted profits.
Most US equity traders participating in the dot-com bubble had a fair idea that the market would be positive the next day. Most of these traders, without any software or market acumen, were able to predict the drift of the US equities market more or less accurately for five years. Even so, one statistical outlier, such as the dot-com bust, can reverse the gains of a successful multi-year strategy. The “most statistically accurate” traders suffer the worst losses, while the “most statistically inaccurate” traders are made an overnight success. From a fundamental-value perspective, there is also the problem of a theoretical equity whose market valuation is five times its “fair” price. If a trader had sold short on model recommendations when the market valuation was four times fair value, before the peak, the trader would have lost money despite being “right.” The exuberant trend follower, who exclaims “it’s the hot stock to own right now!”, would conceivably make money despite being “wrong.”
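The theoretical equity above makes the point with simple per-share arithmetic. The fair value of 10 is an arbitrary assumed number; the only inputs from the text are the 4x entry and the 5x peak.

```python
fair_value = 10.0          # model's estimate of fundamental value (assumed)
entry = 4 * fair_value     # trader sells short at 4x fair value
peak = 5 * fair_value      # price keeps rising to 5x before any reversion

# P&L per share for a short position forced to cover at the peak
pnl = entry - peak
print(pnl, pnl / entry)    # a loss of 25% of the entry price
```

The model’s overvaluation call was correct the entire time, yet the short seller who acted on it at 4x loses a quarter of the position before the price ever turns.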
It is the position of this article that Malkiel’s definition of an efficient market holds the most weight. The risk-adjusted-return viewpoint does not imply that assets trade anywhere near their fair value, as Black’s definition does, nor does Malkiel suggest that markets behave randomly, as Fama’s does. USD/SAR and USD/HKD, for example, are clearly not random given their respective currency pegs, and yet they are some of the most difficult markets to extract returns from. It would be an expensive academic exercise to trade SAR and HKD on their respective “fair values” for as long as their pegs are enforced.
Intelligent Design, or Natural Selection: Who decides the price?
Friedman asserted that market selection pressure would eventually produce behavior consistent with the maximization of general equilibrium: those who behave irrationally will be driven out of the market by those who behave as if they are rational (Blume and Easley, 2006). This proposed phenomenon is known as the Market Selection Hypothesis (Alchian, 1950). Of course, Friedman assumes that rationality alone is sufficient to earn above-average risk-adjusted returns in the markets. DeLong, Shleifer, Summers, and Waldmann formally analyzed the profits of noise traders and rational traders and found that the noise traders had a higher expected return than the rational traders. They argued that irrationally overconfident noise traders can be the primary influencers of asset prices in certain asset classes (DeLong, Shleifer, Summers, and Waldmann, 1991).
The market’s ability to take low-quality information from thousands of noise traders and synthesize a market-clearing rate that maximizes the general equilibrium given environmental constraints is nothing short of a miracle. Technological innovation has drastically reduced the operating costs of market participation, but there are limits to what science can achieve with regard to speculation and asset pricing. No network of computers, however powerful, can calculate a market-clearing rate more efficiently than the price mechanism itself. The Efficient Market Hypothesis, even in the face of advanced technological innovation, stands.