Isolating Specific Returns

Quantopian Community,

I have something really special in this algorithm; unfortunately, it is dragged down by its common returns. Is there a way to isolate only the specific returns? If possible, could you share a code snippet that achieves this using the risk model (a constraint, etc.)?

I have an annualised specific return of 33% (which gives a Sharpe of 7.2), with an annualised common return of -21%.

Notebook previews are currently unavailable.
78 responses

I know this has been asked before, but an answer wasn't really given.

A good starting point would be to check out the RiskModelExposure constraint, which you can pass in to order_optimal_portfolio. Here's an example of it in action:

from quantopian.pipeline.experimental import risk_loading_pipeline
import quantopian.algorithm as algo
import quantopian.optimize as opt

def initialize(context):
    # Attach the risk loading pipeline so its factor loadings are available later.
    algo.attach_pipeline(risk_loading_pipeline(), 'risk_loadings')

def before_trading_start(context, data):
    context.risk_loadings = algo.pipeline_output('risk_loadings')

def place_orders(context, data):
    # Constrain our risk exposures. We're using the latest version of the default bounds.
    constrain_sector_style_risk = opt.experimental.RiskModelExposure(
        risk_model_loadings=context.risk_loadings,
        version=opt.Newest,
    )

    algo.order_optimal_portfolio(
        objective=some_objective,  # your objective, defined elsewhere
        constraints=[constrain_sector_style_risk],
    )


By default, RiskModelExposure will place an 18% constraint on sector exposures, and a 36% constraint on style exposures. You can tweak the exposure limits to provide a different exposure cap for certain factors - for example, if you pass max_industrials=0.1 into the RiskModelExposure constructor, it'll cap your long industrial exposure at 10%. This post has a good example of the RiskModelExposure constraint in action.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

Hi @Abhijeet Kalyan,

I have already applied these filters. What I don't understand is how my average factor exposures can be zero (or as close as makes no difference) while I am still consistently losing money to these common style returns.

Alternatively, is there a way I can view common returns on a chart by themselves (or specific returns), so I can work out the nature of the losses and see if I can synthesize a hedge against my common-return losses?

constrain_sector_style_risk = opt.experimental.RiskModelExposure(
    min_momentum=-0.01,
    max_momentum=0.01,
    min_short_term_reversal=-0.01,
    max_short_term_reversal=0.01,
    min_value=-0.01,
    max_value=0.01,
    min_size=-0.01,
    max_size=0.01,
    min_volatility=-0.01,
    max_volatility=0.01,
)

order_optimal_portfolio(
    objective=objective,
    constraints=[
        constrain_gross_leverage,
        constrain_pos_size,
        market_neutral,
        sector_neutral,
        constrain_sector_style_risk,
    ],
)


Here in these two sections I should be limiting my factor risk. However, when I look at the notebook I generate off the back of this, I notice that I am still losing a lot of money in common returns. How can I stop this from happening?

Notebook previews are currently unavailable.

Or, even better, is there a way to flat out exclude financial services from the mix? This would cut out the majority of my losses.

To view your specific returns (or common returns) in isolation, you could use the attributed_factor_returns property on the backtest object. This would get you a dataframe of your daily (not cumulative) attributed returns, which you can use to isolate a specific return stream - for example, bt.attributed_factor_returns['specific_returns']. We're also working on changes that will allow you to view your specific and common returns directly in the backtest UI, so stay tuned on that front!
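As a rough sketch of that isolation step, here's a toy stand-in for `bt.attributed_factor_returns` (the `specific_returns` column name comes from the post above; the data and other column are invented, and the real dataframe's columns may differ):

```python
import numpy as np
import pandas as pd

# Toy stand-in for bt.attributed_factor_returns: daily (not cumulative)
# attributed returns, one column per return stream.
rng = np.random.default_rng(0)
dates = pd.bdate_range("2017-01-02", periods=252)
attributed = pd.DataFrame(
    {
        "specific_returns": rng.normal(0.001, 0.002, len(dates)),
        "common_returns": rng.normal(-0.0005, 0.003, len(dates)),
    },
    index=dates,
)

# Isolate one stream and compound it into a cumulative return curve
# (which you could then .plot() in a notebook).
specific_daily = attributed["specific_returns"]
specific_cumulative = (1 + specific_daily).cumprod() - 1

print(specific_cumulative.iloc[-1])  # total specific return over the period
```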

For limiting your financial services exposure, adding tighter-than-default min_financial_services and max_financial_services values to the RiskModelExposure constraint, like you did for the style factors, might help.

How are common returns calculated? Can financial common returns be hedged with a long/short weighted XLF position, or is that not how common returns work?

And if common returns can be hedged with a long/short weighted XLF position, is there a way to see yesterday's financial common returns inside the algorithm and then use them to weight said position?

i.e.

order_target_percent(XLF, -financial_services_factor_exposure(1 day ago))
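A minimal sketch of the idea in that pseudocode: size the XLF hedge as the negative of yesterday's factor exposure, scaled by XLF's assumed loading on the factor. The helper and the loading value are hypothetical, not part of any Quantopian API:

```python
def xlf_hedge_weight(fin_services_exposure, hedge_beta=1.0):
    """Target XLF portfolio weight to offset yesterday's financial-services
    factor exposure. hedge_beta is XLF's assumed loading on that factor
    (1.0 if you treat XLF as a pure proxy for it); both are assumptions."""
    return -fin_services_exposure / hedge_beta

# If yesterday's exposure was +0.08 and XLF loads ~1.0 on the factor,
# short 8% of the portfolio in XLF.
print(xlf_hedge_weight(0.08))  # -0.08
```

In an algorithm, the result would feed something like order_target_percent on the XLF sid.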

@Quant Trader I took a look at the performance attribution of this algo. It has an alert, "This algorithm has a relatively high turnover of its positions. As a result, performance attribution might not be fully accurate." I suggest that you use return-based performance attribution instead of using this position-based performance attribution.

For a return-based performance attribution, you could regress the algo daily returns on the daily common risk factor returns with combined L1 and L2 priors as regularizer. (http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html#sklearn.linear_model.ElasticNet)
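A minimal sketch of that return-based attribution on synthetic data, using scikit-learn's ElasticNet. The factor count, loadings, and regularization strength are all invented for illustration:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(42)
n_days = 500

# Synthetic daily returns for five common risk factors (names/values illustrative).
factor_returns = rng.normal(0.0, 0.01, size=(n_days, 5))

# Algo daily returns: known factor loadings plus an idiosyncratic stream.
true_betas = np.array([0.3, -0.2, 0.0, 0.0, 0.1])
specific = rng.normal(0.0005, 0.002, n_days)
algo_returns = factor_returns @ true_betas + specific

# Regress algo returns on factor returns with combined L1/L2 regularization.
model = ElasticNet(alpha=1e-6, l1_ratio=0.5, fit_intercept=True)
model.fit(factor_returns, algo_returns)

estimated_betas = model.coef_
# The residual approximates the specific (idiosyncratic) daily returns.
residual = algo_returns - model.predict(factor_returns)

print(np.round(estimated_betas, 2))
```

In practice you would replace the synthetic arrays with your algo's daily returns and the risk model's daily factor returns.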


Hello @Rene Zhang,

Thank you for your help; I managed to resolve the problem by making the algorithm run more than once per day.

for i in range(1, 300, 5):
    schedule_function(get_factor, date_rules.every_day(), time_rules.market_open(minutes=i))
    schedule_function(allocate, date_rules.every_day(), time_rules.market_open(minutes=i + 1))


This had a double benefit: it reduced the common-returns losses and simultaneously increased specific returns. Unfortunately, it isn't eligible for the Quantopian Open because it has a daily turnover of 1885.0%.

Notebook previews are currently unavailable.

@Quant Trader, those are really impressive numbers.

Would you be so kind as to re-run your presented backtest full tear sheet with the round_trips option turned on, as in:

bt.create_full_tear_sheet(round_trips=True)

Interested in viewing a few of those numbers. Thanks.

Hi Guy,

Is there a way to do that without generating the entire rest of the backtest? The notebook's been running for the last hour and hasn't finished yet; maybe it's due to the high turnover?

Alternatively, would you be happy for me to run the same backtest, but over a shorter time period (07-17 to 03-18) in order for the notebook to actually get somewhere?

Your strategy is generating alpha, and it appears sustainable, since your cumulative-returns log chart shows an increasing spread above the SPY benchmark. And that is the alpha one should be looking for.

It's the processing which is taking the time. I've just reused the same backtest I loaded in earlier, but the section where I pasted in

bt.create_full_tear_sheet(round_trips=True)


has had the little star in the top left corner for the last hour and a bit.

Then: not enough memory!

Memory peaked at 31% usage. I'm just going to run a shorter backtest with a lower refresh rate to slow the trading down. The returns may be slightly different, but the gist should still be the same.

That should hopefully allow the round-trip analysis to load faster.

Hi Guy,

At long last (the thing has literally been running for the last 3 hours), here is the round-trip analysis of a simplified version of the algorithm (I had to slightly reduce the rebalance rate to get it to process this side of 2018 :P).

Hope it shows you what you want to see.

Notebook previews are currently unavailable.

@Quant Trader, those are fantastic numbers. Great job. Impressive.

Who would ever want some 5-10% return after seeing numbers like those?

Add a little leverage (5-10%), not much. Allow a slight positive market exposure (5-10%), and you should see your strategy fly even more. The added performance will cover the small leveraging fees. Doing this, your alpha will turn exponential.

The chart of importance is the cumulative returns log-chart which sees the alpha spread smoothly increasing with time. Showing that your trading strategy does generate meaningful increasing positive alpha.

The only question I have is: the Performance Relative to Common Risk Factors section gives positive results while all the cumulative return factors are negative. How could they add up to something positive?

@Guy Fleury,

The only question I have is: the Performance Relative to Common Risk Factors section gives positive results while all the cumulative return factors are negative. How could they add up to something positive?

I'm not entirely sure about this, to be honest; you'll have to ask Quantopian how that works. I would assume it's something to do with the specific returns I'm generating, which are independent of common factors?

Thanks for your suggestion about leverage, by the way; it would seem it disproportionately benefited the algorithm (I'm not entirely sure why that's the case).

Notebook previews are currently unavailable.

The real alpha illustrated in your cumulative returns log-chart is compounded over the entire period. And since you are using a percent of equity betting system, any extra money made available will be put to use by increasing the bet size.

Which brings me to what I would have liked to see: round_trips=True, to compare the numbers.

Somewhere in the Performance Relative to Common Risk Factors section there is something wrong. Someone from Q should provide answers.

Hi Guy,

https://www.quantopian.com/posts/new-tool-for-quants-the-quantopian-risk-model

Components of the Quantopian Risk Model

The deliberate, careful design of a risk model codifies a particular view of the market. The Quantopian Risk Model is designed to identify the particular risk exposures that are desired by our investor clients.

The risk model consists of a series of cascading linear regressions on each asset. In each step in the cascade, we calculate a regression, and pass the residual returns for each asset to the next step.

Sector returns - Our model has 11 sectors. A sector ETF is specified to represent each sector factor. Each stock is assigned to a sector. We perform a regression to calculate each stock's beta to its respective sector. A portion of each stock's return is attributable to its sector. The residual return is calculated and passed to the next step.

Style risk - We start with the residual from the sector return, above. We then regress the stock against the 5 style factors together. The five styles in the Quantopian risk model:

Momentum - The momentum factor captures return differences between stocks on an upswing (winner stocks) and the stocks on a downswing (loser stocks) over 11 months.
Company Size - The size factor captures return differences between big-cap stocks and small-cap stocks.
Value - The value factor captures return differences between expensive stocks and inexpensive stocks (measured by the ratio of the company's book value to the price of the stock).
Short-term Reversal - The short-term reversal factor captures return differences between stocks with strong recent losses that tend to reverse (recent loser stocks) and stocks with strong recent gains that tend to reverse (recent winner stocks) over a short time period.
Volatility - The volatility factor captures return differences between high-volatility and low-volatility stocks in the market. Volatility can be measured over the historical long term or the near term.
Once the sector and style components have all been removed, the residual is the specific return.

From this, I'm pretty sure the explanation is that most of the returns I generate are independent of the common factors (i.e. it is all residual).
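The cascade described above can be illustrated with a toy numpy example, simplified to one sector and two style factors (all numbers invented): regress the stock on its sector return, pass the residual to a style regression, and call the final residual the specific return.

```python
import numpy as np

rng = np.random.RandomState(7)
n = 750  # ~3 years of daily returns

sector = rng.normal(0, 0.01, n)            # sector ETF daily return
styles = rng.normal(0, 0.01, size=(n, 2))  # e.g. momentum, value

# A stock with known sector beta, style loadings, and specific return.
stock = 1.2 * sector + styles @ np.array([0.4, -0.3]) + rng.normal(0, 0.002, n)

def regress_residual(y, X):
    """OLS of y on X (with intercept); return the residual."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ coef

# Step 1: strip the sector component; pass the residual on.
after_sector = regress_residual(stock, sector)
# Step 2: strip the style components from what's left.
specific = regress_residual(after_sector, styles)

print(specific.std())  # should land close to the 0.002 specific vol we planted
```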

@Quant Trader, I would think not. It would mean being inversely correlated to all 16 factors.

If such was the case, it would give your strategy an even much higher comparative value.

Have you tried the: round_trips=True thing? Still curious to see the numbers.

Here you go, same simplified algorithm I used earlier last time I showed you the round trips, but with 1.1x leverage this time

Notebook previews are currently unavailable.

The increase in returns I would still describe as disproportionate. The percent-profitable figure has increased from 55% to 56%, which I don't understand considering it's the exact same strategy.

I also don't understand why it can't be the case that my algorithm is inversely correlated with the 16 factors. Doesn't the evidence point to this being the case?

@Quant Trader, there is an explanation for this which you find in the notebook:

Performance attribution is calculated based on end-of-day holdings and
does not account for intraday activity. Algorithms that derive a high
percentage of returns from buying and selling within the same day may
receive inaccurate performance attribution.

That completely slipped my mind, thanks for pointing that out!

I still don't understand how the leverage is influencing the percent profitable figure though?

@Quant Trader, like I said before, the alpha in your strategy is being compounded due to your method of play: albeit at a low rate, but still compounding. I have a formula for that somewhere. You have an expanding bet size, which also helps.

My questions are now centered around feasibility and sustainability.

Take your first notebook, its achieved CAGR rate is remarkable, outstanding.

It had a 567% CAGR. At that rate, a $10M portfolio would grow to over $130 billion in 5 years. By year 10, it would reach over $1 million billion. So, sustainability and feasibility are now the proper questions to ask. And because of those questions, what could you do to make it doable anyway, even if to a lesser extent? Planning what you want your strategy to do over the next few years will help you compensate for those issues and improve your trading strategy even further.

The first thing to do is to run it with high-market-cap stocks only. At the moment it is running on the QTradableStocksUS() and simply picking the best stocks available; if the alpha remains, it's still a viable strategy. No one expects it to be able to achieve the 567% number, that would be remarkable (and impossible).

What I'm going to do is live trade it with some demo money for a period of time to test it out of sample. I might also vary the base universes over and over again (small cap, mid cap, large cap, high PE, low PE, etc.) in order to test that there is actually something beyond over-fitting at play here. The logic behind the algorithm makes mathematical sense (in my mind at least), but it may just be a case of fortunate environments. If I really wanted to put it to the test, a Monte Carlo system with a completely randomised universe, full of randomly generated time series for the strategy to trade, could be a very effective tool; this works because it isn't actually trading the stocks.

The one I ran with large-cap stocks has been placed here. Obviously something still exists, but the returns are far more realistic; they're still not to be sniffed at, though:

Notebook previews are currently unavailable.

I would say the factor still exists. There are a few problems, however: random leverage spikes, high negative factor exposure to size, and largely negative exposures across all daily sector factors.
Probably a more realistic backtest. I also ran it over the 2008 GFC (just to see how it performed out of sample); although the drawdown was slightly larger, once again it performs well.

Notebook previews are currently unavailable.

Fantastic performance and returns! Just out of curiosity, is this a Machine Learning algorithm, and did you make sure that there is no forward-looking bias?

@James Villa, this algorithm isn't based on Machine Learning; it makes use of a (as far as I am aware) self-discovered factor. There is no forward-looking bias to the best of my knowledge: it uses the QTradableStocksUS universe and doesn't make any attempt to access future data, so unless there's a problem with the Quantopian backtester I don't see how it could come in. A lot of the returns can, however, be attributed to small-cap stocks; once those are removed, the returns die off a bit, though even then it's still an acceptable algorithm (Sharpe of 3 or higher, depending on start date). Its returns really pick up when it's allowed to pick the best stocks it can from the universe.

@Quant Trader, thanks for your reply and info. With small caps, you can run into problems with shorting and liquidity. The one thing that strikes me the most is how the returns are negatively correlated with all 16 factors.

@James Villa, I think what has caused that is the fact that this strategy is intraday. As @Guy Fleury pointed out:

Performance attribution is calculated based on end-of-day holdings and does not account for intraday activity. Algorithms that derive a high percentage of returns from buying and selling within the same day may receive inaccurate performance attribution.

It may not be the case that it's actually negatively correlated with all factors. Of course, it would be great if it was :D

did you make sure that there is no forward looking bias?
I forgot to mention: I've been live trading it for about a week now (demo cash) to see if it still works, and it's performing to the same standard, so I think we can rule out look-ahead bias. Of course, one week isn't a large enough sample size. The major problem I've been having is Quantopian's speed; when I increase the rebalance frequency (4893.2% vs ~1800% daily turnover), I get significantly higher returns.

Notebook previews are currently unavailable.

@Quant Trader, and you should. This would be confirmed with: bt.create_full_tear_sheet(round_trips=True) ;)

@Guy Fleury, I would like to do that every time, but it takes (what feels like) a million years to run that notebook due to the sheer number of positions it takes :D

@Quant Trader, there were two words of importance in my post. The first: should, as if it was evident that it should do so. The second: confirm, as in: yes, it does. I would go for the answers. You should want to know what the limits are; how far you can push it will, in a way, set your limits. Then you can pull back to a level where you feel more comfortable (risk-wise, that is). IMHO.

I've spent some time tinkering with the base algorithm, and this is the final product. I'm very happy with it. I've sacrificed the outlandish returns for greater security, which comes in the form of significantly reduced common-returns losses, periods at which the factor exposures and style factor exposures are basically 0, and an (almost) consistent beta of 0.0. What I found particularly interesting, though, is that the annual returns also seem to be increasing exponentially, implying this is not only a source of alpha, but a source of alpha which is becoming far easier to exploit.

Notebook previews are currently unavailable.

Quant Trader, I am intrigued - would you be willing to share what stock universe you are using (QTradableStocksUS?) and what your assumptions are for trading costs / slippage?
It has been highlighted on this forum previously that you have to be very careful with HFT algos on Quantopian, owing to the difficulty of accurately modelling the bid/ask spread (and, indeed, that it is possible to write an algo which is seemingly astronomical but in reality is simply catching the bid/ask). If the algo is robust to universe changes (i.e. focusing on tradable and highly liquid stocks) and is scalable (produces a similar curve at $10m+ starting capital), whilst also incorporating conservative assumptions for slippage, you may just be on your way to becoming a billionaire.

Also, just to note, I would imagine that the "almost exponential" profile of the curve is more likely a compounding effect than the alpha source becoming easier to exploit over time.
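That compounding effect can be shown with quick arithmetic: with a constant per-day edge, each year's dollar profit still grows geometrically, producing an "exponential" annual-returns profile without the alpha itself changing. (Illustrative numbers, not the algo's.)

```python
# A constant edge, compounded: the per-period return never changes,
# yet each year's dollar P&L is a fixed multiple of the previous year's.
daily_return = 0.005   # same edge every trading day (illustrative)
capital = 100.0
yearly_pnl = []
for year in range(4):
    start = capital
    for _ in range(252):
        capital *= 1 + daily_return
    yearly_pnl.append(capital - start)

print(yearly_pnl)  # each year's profit is ~3.5x the previous year's
```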

Will

@Will van Es

Slippage: set_slippage(slippage.FixedBasisPointsSlippage())
Commission: (nothing input here, so the Quantopian default)

Return Curve: Similar curve at $10m+ but begins to tail off at higher values ($100m+)

When I was referring to the exponential growth in annual returns, I wasn't talking about the cumulative return curve, but the annual returns section lower down (between monthly returns and distribution of monthly returns).

I have re-run the algorithm with only the top 100 most liquid stocks allowed (filtered daily); the returns stay, though not necessarily as smooth (4.9 Sharpe vs 11.4 Sharpe).

@Guy Fleury,

As you've always asked for round_trips, here's a notebook with that included.

Notebook previews are currently unavailable.

@Quant Trader, outstanding equity curve. I especially liked the 0.07 Gross Leverage. You have something valuable there. Hope Point72 sees the inherent benefits of adding your strategy to their mix.

Again, well done.

Since you are doing intraday trading and the Q/Pyfolio backtester/metrics tools are not really set up for that, I believe you should run the pvr and pvr_chart tools that are in the last algo of @Blue's thread:

https://www.quantopian.com/posts/pvr

There he records max metric values on a minutely basis, and then reports the maxes daily.
Perhaps check out his other posts on the metrics subject also, as I've always found them useful.
A while ago, we tried some intraday algos, but always found measuring the results a problem on the Q platform.

Your results are great!...so good luck!
alan

2016-12-29 19:10 _pvr:108 INFO PvR 1.1042 %/day cagr 0.156 Portfolio value 53750412 PnL 3750412
2016-12-29 19:10 _pvr:109 INFO Profited 3750412 on 2695544 activated/transacted for PvR of 139.1%
2016-12-29 19:10 _pvr:110 INFO QRet 7.50 PvR 139.13 CshLw 47508344 MxLv 0.05 MxRisk 2695544 MxShrt -2695544
2017-06-30 19:10 _pvr:108 INFO PvR 1.1024 %/day cagr 0.162 Portfolio value 58075449 PnL 8075449
2017-06-30 19:10 _pvr:109 INFO Profited 8075449 on 2907004 activated/transacted for PvR of 277.8%
2017-06-30 19:10 _pvr:110 INFO QRet 16.15 PvR 277.79 CshLw 47508344 MxLv 0.05 MxRisk 2907004 MxShrt -2907005
2017-12-29 19:10 _pvr:108 INFO PvR 0.9560 %/day cagr 0.149 Portfolio value 61567479 PnL 11567479
2017-12-29 19:10 _pvr:109 INFO Profited 11567479 on 3200957 activated/transacted for PvR of 361.4%
2017-12-29 19:10 _pvr:110 INFO QRet 23.13 PvR 361.38 CshLw 47508344 MxLv 0.06 MxRisk 3200957 MxShrt -3200957
2018-03-27 21:00 _pvr:108 INFO PvR 0.9970 %/day cagr 0.154 Portfolio value 64116231 PnL 14116231
2018-03-27 21:00 _pvr:109 INFO Profited 14116231 on 3240047 activated/transacted for PvR of 435.7%
2018-03-27 21:00 _pvr:110 INFO QRet 28.23 PvR 435.68 CshLw 47508344 MxLv 0.06 MxRisk 3240047 MxShrt -3240047
2018-03-27 21:00 pvr:185 INFO 2016-07-01 to 2018-03-27 $50000000 2018-03-29 00:43 US/Pacific

@Alan Coppola, as far as I'm aware there is nothing here I should be worried about. PvR significantly better than realised returns, and no leverage spikes (like I was kind of worried about).

@Quant Trader, congratulations! And thank you for sharing your results. When you say 'live demo' trading, do you mean Q's paper-trading environment, or an independent one via a broker? If it's Q's environment, there may not be any look-ahead bias, but do you think it's possible that you've found a 'bug' in their trading simulation, giving you unrealistic fills that you wouldn't necessarily get in the real live market (e.g. being able to buy the bid and sell the offer w/o any 'takers' crossing the spread)?

Also, you don't by any chance have a negative value for the 'commission' and/or 'slippage' costs? Sorry, but I had to ask. :) I do hope you've found a true alpha factor that no-one else knows about, but it does seem a bit too good to be true in my view, and strange that market makers and other ultra-low-latency HFT firms haven't already taken advantage of it. I hope I'm wrong though! Congrats again and all the best!

Joakim

@Joakim Arvidsson (Cream Mongoose), I'm 100% sure that I've not got a negative value for commission and slippage; as I said, I'm using the default for commission and FixedBasisPointsSlippage() for the slippage. There is a possibility that there's a bug in the backtesting environment, but it's also working in the Quantopian Live Trader (another one of their environments), so I think it's unlikely that the same error transfers across. With regards to it being an HFT strategy, I wouldn't describe it as that; the majority of the high turnover is caused by me actively having to hedge common factor exposures on a minutely basis. If you go back up to the top, my turnover was minimal, but I was leaking money everywhere from common factors.
The turnover increase was my answer to that. I would describe it as a normal quant strategy whose turnover I had to greatly increase in order to hedge its risks. Of course, it's hard to model market impact with a backtester, so it could be getting unrealistic fills; but even if it is, it's still managing to turn a profit trading only the stocks with the highest liquidity, which should in theory allow the algorithm to trade in real life with very little impact.

@Quant Trader, I'm curious to find out if you can change your algo's frequency to a daily timeframe instead of intraday and see if the trading logic still holds. If it does, I'll be truly impressed. In my mind, a truly robust trading system will do well in various frequencies/timeframes. Thanks.

@James Villa, if you go right up to the top, that strategy was on the daily timeframe. It still performs, but not as well, primarily because the reason I made it intraday was to hedge out common factor exposure.

I've started getting an error though:

No JSON object could be decoded

What's going on with that? I'm getting these errors on notebooks I've already run earlier with no problem.

@Quant Trader, is your algo reading a JSON file from an external source? The above error seems to imply that.

It isn't. I think the error is on Quantopian's end; it's not something that happened before, and it even happens when re-running a notebook which has worked earlier. Even this algorithm throws a 'No JSON object could be decoded' error when I run a notebook on its backtest.
Backtest from to with initial capital Total Returns -- Alpha -- Beta -- Sharpe -- Sortino -- Max Drawdown -- Benchmark Returns -- Volatility --  Returns 1 Month 3 Month 6 Month 12 Month  Alpha 1 Month 3 Month 6 Month 12 Month  Beta 1 Month 3 Month 6 Month 12 Month  Sharpe 1 Month 3 Month 6 Month 12 Month  Sortino 1 Month 3 Month 6 Month 12 Month  Volatility 1 Month 3 Month 6 Month 12 Month  Max Drawdown 1 Month 3 Month 6 Month 12 Month def initialize(context): schedule_function( rebalance, date_rules.every_day(), time_rules.market_open(hours=1), ) def rebalance(context, data): order_target_percent(sid(8554), 1) pass There was a runtime error. Hi Quant Trader, Thanks for the heads up. We've reproduced this error and are investigating further. Thanks, Josh Disclaimer The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. 
Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances. I get this error too when running the 'Contest Criteria Check' notebook: 0% ETA: --:--:--| | ValueErrorTraceback (most recent call last) in () 1 # Replace the string below with your backtest ID. ----> 2 bt = get_backtest('[removed]') /build/src/qexec_repo/qexec/research/api.py in get_backtest(backtest_id) 116 client.get_sqlbacktest(backtest_id), 117 progress_bar, --> 118 backtest_id, 119 ) 120 /build/src/qexec_repo/qexec/research/results.py in from_stream(cls, result_iterator, progress_bar, algo_id) 591 risk_packet = None 592 --> 593 for msg in result_iterator: 594 prefix, payload = msg['prefix'], msg['payload'] 595 /build/src/qexec_repo/qexec/research/web/client.py in get_sqlbacktest(self, backtest_id) 132 with closing(resp): 133 for msg in resp.iter_lines(): --> 134 yield loads(msg) 135 136 def _make_get_live_algo_request(self, live_algo_id): /usr/lib/python2.7/json/init.pyc in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 337 parse_int is None and parse_float is None and 338 parse_constant is None and object_pairs_hook is None and not kw): --> 339 return _default_decoder.decode(s) 340 if cls is None: 341 cls = JSONDecoder /usr/lib/python2.7/json/decoder.pyc in decode(self, s, _w) 362 363 """ --> 364 obj, end = self.raw_decode(s, idx=_w(s, 0).end()) 365 end = _w(s, end).end() 366 if end != len(s): /usr/lib/python2.7/json/decoder.pyc in raw_decode(self, s, idx) 380 obj, end = self.scan_once(s, idx) 381 except StopIteration: --> 382 raise ValueError("No JSON object could be decoded") 383 return obj, end ValueError: No JSON object could be decoded Hi, we've shipped the fix that was causing the get_backtest() call to fail. 
These notebooks should run smoothly for you now. We're sorry for the inconvenience.

Thanks,
Josh

https://www.quantopian.com/posts/any-suggestions-for-isolating-a-specific-common-return

Any suggestions? At this point I seem to be getting unbelievably lucky when it comes to my algorithms having some really impressive hidden features (it's not intentional).

Two weeks into live trading, here are the results:

2018-04-12 21:00 pvr: INFO 2018-03-29 to 2018-04-12 $10000000 2018-04-12 04:12 US/Pacific
Runtime 9 hr 2.2 min
2018-04-12 21:00 _pvr: INFO QRet 0.97 PvR 23.44 CshLw 9708565 MxLv 0.05 MxRisk 412091 MxShrt -412092
2018-04-12 21:00 _pvr: INFO Profited 96614 on 412091 activated/transacted for PvR of 23.4%
2018-04-12 21:00 _pvr: INFO PvR 2.3445 %/day cagr 0.274 Portfolio value 10096614 PnL 96614
2018-04-11 21:00 pvr: INFO 2018-03-29 to 2018-04-11 $10000000 2018-04-11 04:23 US/Pacific
Runtime 8 hr 51.7 min
2018-04-11 21:00 _pvr: INFO QRet 0.85 PvR 20.65 CshLw 9708565 MxLv 0.05 MxRisk 412091 MxShrt -412092
2018-04-11 21:00 _pvr: INFO Profited 85111 on 412091 activated/transacted for PvR of 20.7%
2018-04-11 21:00 _pvr: INFO PvR 2.2948 %/day cagr 0.268 Portfolio value 10085111 PnL 85111
2018-04-10 21:00 pvr: INFO 2018-03-29 to 2018-04-10 $10000000 2018-04-10 04:09 US/Pacific
Runtime 9 hr 5.2 min
2018-04-10 21:00 _pvr: INFO QRet 0.76 PvR 18.35 CshLw 9708565 MxLv 0.05 MxRisk 412091 MxShrt -412092
2018-04-10 21:00 _pvr: INFO Profited 75602 on 412091 activated/transacted for PvR of 18.3%
2018-04-10 21:00 _pvr: INFO PvR 2.2932 %/day cagr 0.268 Portfolio value 10075602 PnL 75602
2018-04-09 21:00 pvr: INFO 2018-03-29 to 2018-04-09 $10000000 2018-04-09 04:27 US/Pacific
Runtime 8 hr 47.8 min
2018-04-09 21:00 _pvr: INFO QRet 0.64 PvR 15.61 CshLw 9708565 MxLv 0.05 MxRisk 412091 MxShrt -412092
2018-04-09 21:00 _pvr: INFO Profited 64339 on 412091 activated/transacted for PvR of 15.6%
2018-04-09 21:00 _pvr: INFO PvR 2.2304 %/day cagr 0.260 Portfolio value 10064339 PnL 64339
2018-04-06 21:00 pvr: INFO 2018-03-29 to 2018-04-06 $10000000 2018-04-06 04:19 US/Pacific
Runtime 8 hr 55.2 min
2018-04-06 21:00 _pvr: INFO QRet 0.55 PvR 13.40 CshLw 9708472 MxLv 0.05 MxRisk 412091 MxShrt -412092
2018-04-06 21:00 _pvr: INFO Profited 55209 on 412091 activated/transacted for PvR of 13.4%
2018-04-06 21:00 _pvr: INFO PvR 2.2329 %/day cagr 0.260 Portfolio value 10055209 PnL 55209
2018-04-05 21:00 pvr: INFO 2018-03-29 to 2018-04-05 $10000000 2018-04-05 04:26 US/Pacific
Runtime 8 hr 48.0 min
2018-04-05 21:00 _pvr: INFO QRet 0.45 PvR 10.82 CshLw 9712598 MxLv 0.05 MxRisk 412023 MxShrt -412023
2018-04-05 21:00 _pvr: INFO Profited 44562 on 412023 activated/transacted for PvR of 10.8%
2018-04-05 21:00 _pvr: INFO PvR 2.1631 %/day cagr 0.251 Portfolio value 10044562 PnL 44562
2018-04-04 21:00 pvr: INFO 2018-03-29 to 2018-04-04 $10000000 2018-04-04 04:23 US/Pacific
Runtime 8 hr 51.3 min
2018-04-04 21:00 _pvr: INFO QRet 0.35 PvR 8.66 CshLw 9712598 MxLv 0.05 MxRisk 402375 MxShrt -402375
2018-04-04 21:00 _pvr: INFO Profited 34839 on 402375 activated/transacted for PvR of 8.7%
2018-04-04 21:00 _pvr: INFO PvR 2.1646 %/day cagr 0.245 Portfolio value 10034839 PnL 34839
2018-04-03 21:00 pvr: INFO 2018-03-29 to 2018-04-03 $10000000 2018-04-03 04:22 US/Pacific
Runtime 8 hr 52.9 min
2018-04-03 21:00 _pvr: INFO QRet 0.28 PvR 6.89 CshLw 9712598 MxLv 0.05 MxRisk 402375 MxShrt -402375
2018-04-03 21:00 _pvr: INFO Profited 27710 on 402375 activated/transacted for PvR of 6.9%
2018-04-03 21:00 _pvr: INFO PvR 2.2955 %/day cagr 0.262 Portfolio value 10027710 PnL 27710
2018-04-02 21:00 pvr: INFO 2018-03-29 to 2018-04-02 $10000000 2018-04-02 04:30 US/Pacific
Runtime 8 hr 44.2 min
2018-04-02 21:00 _pvr: INFO QRet 0.18 PvR 4.58 CshLw 9695138 MxLv 0.05 MxRisk 401540 MxShrt -401540
2018-04-02 21:00 _pvr: INFO Profited 18382 on 401540 activated/transacted for PvR of 4.6%
2018-04-02 21:00 _pvr: INFO PvR 2.2889 %/day cagr 0.260 Portfolio value 10018382 PnL 18382
2018-03-29 21:00 pvr: INFO 2018-03-29 to 2018-03-29 $10000000 2018-03-29 07:13 US/Pacific
Runtime 6 hr 2.0 min
2018-03-29 21:00 _pvr: INFO QRet 0.07 PvR 1.97 CshLw 9744756 MxLv 0.05 MxRisk 380081 MxShrt -380082
2018-03-29 21:00 _pvr: INFO Profited 7488 on 380081 activated/transacted for PvR of 2.0%
2018-03-29 21:00 _pvr: INFO PvR 1.9702 %/day cagr 0.208 Portfolio value 10007488 PnL 7488
2018-03-29 14:58 pvr: INFO 2018-03-29 to 2018-03-29 $10000000 2018-03-29 07:13 US/Pacific

Had to dig into my old notebooks and found an algo with similar phenomena to yours: huge negative common returns alongside huge specific returns on a daily timeframe. Can you tell me how you hedge out those negative common returns? I know you had to change your timeframe from daily to intraday, but how?

Notebook previews are currently unavailable.

I dealt with the problem by greatly increasing the daily turnover, though I think it depends on the nature of your alpha source. My strategy benefited from reduced exposure to market movements, so shortening the time I held positions reduced my common-return losses. I don't know the nature of your strategy, though; your returns could be tied to common returns.

The method I used was to call the rebalance function multiple times per day using:

for i in range(1, 300, 2):
    schedule_function(place_orders, date_rules.every_day(), time_rules.market_open(minutes=i))
    schedule_function(close_orders, date_rules.every_day(), time_rules.market_open(minutes=(i+1)))


The other thing (which I was beginning to experiment with, but haven't got working yet) was to create a pipeline which returns the factor exposure for each stock in the universe, then to attempt to negate the exposures by ensuring that the sum of the exposures of the stocks in your portfolio is equal to 0 (or at least the exposures you want to be 0).

e.g.

from quantopian.pipeline import Pipeline
from quantopian.pipeline.experimental import ConsumerDefensive

def make_pipeline():
    # Flip the sign of the consumer-defensive loading, so ranking on
    # combined_factor favours stocks that negate the exposure
    defensive_exposure = ConsumerDefensive()
    combined_factor = -defensive_exposure.zscore()

    pipe = Pipeline(
        columns={
            'combined_factor': combined_factor,
        },
    )
    return pipe


and then running with the output. (I haven't got very far with this yet because I've been quite busy recently). I would assume this is how the Quantopian Risk Model works, but I'm hoping you can make it more effective by running it more often.
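To sanity-check that negation idea outside the backtester, note that the portfolio-level exposure to a factor is just the weighted sum of per-stock loadings. A minimal pandas sketch (the tickers, loadings, and weights here are all invented for illustration):

```python
import pandas as pd

# Hypothetical per-stock style loadings, as a pipeline might return them
exposures = pd.DataFrame({
    'momentum':   [0.8, -0.5, -0.3],
    'volatility': [0.2,  0.1, -0.3],
}, index=['AAA', 'BBB', 'CCC'])

# Candidate portfolio weights (long/short, dollar neutral)
weights = pd.Series([0.5, -0.25, -0.25], index=exposures.index)

# Net portfolio exposure to each factor: sum over stocks of weight * loading
net_exposure = exposures.mul(weights, axis=0).sum()
print(net_exposure)  # momentum 0.6, volatility 0.15 -> not yet neutral
```

If the net exposure isn't close to zero, the weights can be tilted until it is, which is essentially what the optimizer's RiskModelExposure constraint automates.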

I guess we'll understand better once the risk model white paper is published, but my impression is that the risk model constraint (and the beta constraint) relies on trailing indicators and assumes that, out of sample, there will be stability (but of course there isn't). It works well for sectors, since stocks don't jump from sector to sector very much. For style risks it may work at a gross level, but there is no guarantee that, for a given slice of the universe and a specific algo, it will project forward correctly based on trailing data. It is better than nothing, though, I suppose.

Thanks for your reply. I think I want to keep my daily timeframe, as I am not confident that Q's intraday backtesting framework would give a realistic outcome, as @Joakim Arvidsson pointed out. Maybe I'll try your experiment with negating factor exposures. Also, do you use the MaximizeAlpha or TargetWeights construct for optimization?

I've looked into this a bit more and I think I've worked it out:
Quantopian defines the momentum factor as:

The difference in return between assets on an upswing and a downswing over 11 months.

They do the same for all the other factors. What I am assuming is that they have created an algorithm for each factor and then calculated the correlation of your algorithm with these 'factor algorithms' to generate your factor exposures.

I am then assuming that your sector exposure is the stocks from each sector that you hold as a percentage of your total holdings.
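Under that assumption, sector exposure is straightforward to compute from holdings. A toy sketch (the sectors and position values are made up), using gross holdings as the denominator:

```python
import pandas as pd

# Hypothetical dollar positions, tagged by sector
positions = pd.DataFrame({
    'sector': ['Tech', 'Tech', 'Energy', 'Utilities'],
    'value':  [40000.0, 10000.0, -30000.0, 20000.0],
})

# Exposure per sector as a fraction of gross (absolute) holdings
gross = positions['value'].abs().sum()
sector_exposure = positions.groupby('sector')['value'].sum() / gross
print(sector_exposure)  # Tech 0.5, Energy -0.3, Utilities 0.2
```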

They define common returns as:

Returns that are attributable to common risk factors. There are 11 sector and 5 style risk factors that make up these returns.

So what I am guessing is, it's:

Momentum Factor Algorithm = MF
Value Factor Algorithm = VF
Size Factor Algorithm = SF
Volatility Factor Algorithm = VOF
Short-Term Reversal Factor Algorithm = STF

Daily Common Return = β(Momentum)*MF + β(Size)*SF + etc...
and Common Returns is just a summation of this calculation over the time period you ran the algorithm for.
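Written out, that guess is a simple dot product of exposures and factor returns, accumulated over time. A numpy sketch with invented numbers:

```python
import numpy as np

# Invented one-day returns for the five style factor portfolios
# (order: MF, VF, SF, VOF, STF)
factor_returns = np.array([0.002, -0.001, 0.0005, -0.003, 0.001])

# Invented exposures (betas) of the algo to each factor
betas = np.array([0.3, -0.1, 0.05, 0.2, -0.15])

# Daily common return = sum over factors of beta_i * factor_return_i
daily_common = betas @ factor_returns

# The common-returns curve is then the running sum of these daily values.
```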

What I'm fairly sure specific returns are is the returns of stocks which perform beyond the average basket. As (I believe) they've created these factor algorithms from the difference between two baskets of stocks, those which (using momentum as an example) are on an upswing and those on a downswing, using a basket gives you an average difference. However, if you were to take the difference between the most 'upswingy' stock and the most 'downswingy' stock, I think the difference in performance between that pairing (or any other non-basket pairing) and the 'momentum factor algorithm' would be counted as specific returns.

This is just me trying to work it out based on what Quantopian has given me; of course I'm still not sure.

I assume this is what the Quantopian Risk Model does. I would appreciate someone from Quantopian weighing in though as I'm still fairly uncertain as to how it works.

@QT,

I have been experimenting with negating factor exposures, and with combinations of them, to get the optimal risk/return from my alpha logic. I think we are on the same page with regard to how Q is modeling the various risk components to form their common returns. So, in theory, anything that is not accounted for by correlation to the various common-return components is specific returns. Still trying different weighting schemes, but slowly getting there.

@James Villa,

I've spent a bit of time writing an algorithm which attempts to eradicate the specific returns. I've done this using my best interpretation of how the Quantopian risk model works.

It's not perfect, but it works in my backtests. The problems are caused by me:
a) Not knowing the size of the basket Quantopian uses for calculating factor returns
b) The stocks in the basket having their own sector exposures, which I haven't accounted for because it would become an infinite feedback loop if I kept doing that

from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.data import builtin
from quantopian.pipeline import CustomFactor
from quantopian.pipeline.data.builtin import USEquityPricing
import numpy as np
from quantopian.pipeline.experimental import Volatility, Momentum, ShortTermReversal, Size, Value, BasicMaterials, ConsumerCyclical, FinancialServices, RealEstate, ConsumerDefensive, HealthCare, Utilities, CommunicationServices, Energy, Industrials, Technology
import pandas as pd
from quantopian.pipeline.factors import SimpleMovingAverage, AnnualizedVolatility

def initialize(context):

    context.leverage = 1.0

    schedule_function(record_vars,
                      date_rules.every_day(),
                      time_rules.market_close(hours=1))

    schedule_function(rebalance,
                      date_rules.every_day(),
                      time_rules.market_open(hours=1))

    attach_pipeline(factor_pipeline_long(), 'factor_pipeline_long')
    attach_pipeline(factor_pipeline_shorts(), 'factor_pipeline_shorts')
    attach_pipeline(stock_pipeline(), 'stock_pipeline')

def stock_pipeline():

    # Momentum
    sma_high = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=30)
    Momentum_Ratio = USEquityPricing.close.latest/sma_high

    # Value
    pb_ratio = morningstar.valuation_ratios.pb_ratio.latest

    # Short-term reversal
    sma_low = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=3)
    Reversal_Ratio = USEquityPricing.close.latest/sma_low

    # Volatility
    volatility = AnnualizedVolatility(inputs=[USEquityPricing.close], window_length=30)

    # Size
    market_cap = morningstar.valuation.market_cap.latest

    # The long/short selections were missing from the pasted source; the
    # quintile cutoffs and factor directions below are an assumption.
    momentum_longs = Momentum_Ratio.percentile_between(80, 100)
    momentum_shorts = Momentum_Ratio.percentile_between(0, 20)
    value_longs = pb_ratio.percentile_between(0, 20)
    value_shorts = pb_ratio.percentile_between(80, 100)
    reversal_longs = Reversal_Ratio.percentile_between(0, 20)
    reversal_shorts = Reversal_Ratio.percentile_between(80, 100)
    volatility_longs = volatility.percentile_between(0, 20)
    volatility_shorts = volatility.percentile_between(80, 100)
    size_longs = market_cap.percentile_between(0, 20)
    size_shorts = market_cap.percentile_between(80, 100)

    return Pipeline(
        columns={
            'momentum_longs': momentum_longs,
            'momentum_shorts': momentum_shorts,
            'value_longs': value_longs,
            'value_shorts': value_shorts,
            'reversal_longs': reversal_longs,
            'reversal_shorts': reversal_shorts,
            'volatility_longs': volatility_longs,
            'volatility_shorts': volatility_shorts,
            'size_longs': size_longs,
            'size_shorts': size_shorts,
        },
    )

def factor_pipeline_long():

    filter_requirement = morningstar.valuation_ratios.fcf_yield.latest
    working_capital = morningstar.balance_sheet.working_capital.latest
    market_cap = morningstar.valuation.market_cap.latest  # was working_capital, a likely copy-paste bug
    pb_ratio = morningstar.valuation_ratios.pb_ratio.latest

    filter_ = filter_requirement*(working_capital/market_cap)

    filter_1 = (pb_ratio > 0) & (pb_ratio < 5) & (market_cap > 2e7)

    longs = filter_.percentile_between(80, 100, mask=filter_1)

    return Pipeline(
        columns={
            'VOL': Volatility(),
            'MOM': Momentum(),
            'STR': ShortTermReversal(),
            'SZE': Size(),
            'VLE': Value(),
            'BM': BasicMaterials(),
            'CC': ConsumerCyclical(),
            'FS': FinancialServices(),
            'RE': RealEstate(),
            'CD': ConsumerDefensive(),
            'HC': HealthCare(),
            'U': Utilities(),
            'CS': CommunicationServices(),
            'E': Energy(),
            'I': Industrials(),
            'T': Technology(),
        },
        screen=longs,  # the computed filter was previously unused
    )

def factor_pipeline_shorts():

    filter_requirement = morningstar.valuation_ratios.fcf_yield.latest
    working_capital = morningstar.balance_sheet.working_capital.latest
    market_cap = morningstar.valuation.market_cap.latest  # was working_capital, a likely copy-paste bug
    pb_ratio = morningstar.valuation_ratios.pb_ratio.latest

    filter_ = filter_requirement*(working_capital/market_cap)

    filter_1 = (pb_ratio > 0) & (pb_ratio < 5) & (market_cap > 2e7)

    shorts = filter_.percentile_between(1, 20, mask=filter_1)

    return Pipeline(
        columns={
            'VOL': Volatility(),
            'MOM': Momentum(),
            'STR': ShortTermReversal(),
            'SZE': Size(),
            'VLE': Value(),
            'BM': BasicMaterials(),
            'CC': ConsumerCyclical(),
            'FS': FinancialServices(),
            'RE': RealEstate(),
            'CD': ConsumerDefensive(),
            'HC': HealthCare(),
            'U': Utilities(),
            'CS': CommunicationServices(),
            'E': Energy(),
            'I': Industrials(),
            'T': Technology(),
        },
        screen=shorts,  # the computed filter was previously unused
    )

def before_trading_start(context, data):
    # (This function header was missing from the pasted source; the body
    # clearly runs once per day on the pipeline outputs.)

    context.output = pipeline_output('stock_pipeline')
    for col in ['momentum_longs', 'momentum_shorts', 'value_longs',
                'value_shorts', 'reversal_longs', 'reversal_shorts',
                'volatility_longs', 'volatility_shorts', 'size_longs',
                'size_shorts']:
        setattr(context, col, context.output[context.output[col]].index)

    longs = pipeline_output('factor_pipeline_long')
    shorts = pipeline_output('factor_pipeline_shorts')

    style_factors = ['VOL', 'MOM', 'STR', 'SZE', 'VLE']
    sector_factors = ['BM', 'CC', 'FS', 'RE', 'CD', 'HC', 'U', 'CS', 'E', 'I', 'T']

    # Net exposure per factor: (sum of long loadings - sum of short loadings)
    # divided by the number of names contributing
    exposure = {}
    for col in style_factors + sector_factors:
        long_loadings = longs[col].dropna()
        short_loadings = shorts[col].dropna()
        exposure[col] = ((long_loadings.sum() - short_loadings.sum()) /
                         (len(long_loadings) + len(short_loadings)))

    total_factor = sum(abs(exposure[c]) for c in style_factors)
    total_sector = sum(abs(exposure[c]) for c in sector_factors)

    # Normalised style weights
    context.vol_weight = exposure['VOL']/total_factor
    context.mom_weight = exposure['MOM']/total_factor
    context.str_weight = exposure['STR']/total_factor
    context.sze_weight = exposure['SZE']/total_factor
    context.vle_weight = exposure['VLE']/total_factor

    # Normalised sector weights (the last four keep their original
    # *_exposure names because rebalance() reads them under those names)
    context.bm_weight = exposure['BM']/total_sector
    context.cc_weight = exposure['CC']/total_sector
    context.fs_weight = exposure['FS']/total_sector
    context.re_weight = exposure['RE']/total_sector
    context.cd_weight = exposure['CD']/total_sector
    context.hc_weight = exposure['HC']/total_sector
    context.u_weight = exposure['U']/total_sector
    context.cs_exposure = exposure['CS']/total_sector
    context.e_exposure = exposure['E']/total_sector
    context.i_exposure = exposure['I']/total_sector
    context.t_exposure = exposure['T']/total_sector

def record_vars(context, data):

    record(leverage=context.account.leverage)

def rebalance(context, data):

    # Sector ETFs used to hedge out sector exposure
    etf_stocks = [sid(19654), sid(19662), sid(19656), sid(26669), sid(45719),
                  sid(19661), sid(19660), sid(26670), sid(19655), sid(19657),
                  sid(19658)]

    # Short each sector ETF in proportion to the portfolio's sector weight
    order_target_percent(sid(19654), -0.5*context.bm_weight)
    order_target_percent(sid(19662), -0.5*context.cc_weight)
    order_target_percent(sid(19656), -0.5*context.fs_weight)
    order_target_percent(sid(26669), -0.5*context.re_weight)
    order_target_percent(sid(45719), -0.5*context.cd_weight)
    order_target_percent(sid(19661), -0.5*context.hc_weight)
    order_target_percent(sid(19660), -0.5*context.u_weight)
    order_target_percent(sid(26670), -0.5*context.cs_exposure)
    order_target_percent(sid(19655), -0.5*context.e_exposure)
    order_target_percent(sid(19657), -0.5*context.i_exposure)
    order_target_percent(sid(19658), -0.5*context.t_exposure)

    # Style long/short baskets and the style weights computed in
    # before_trading_start
    baskets = [
        (context.momentum_longs, context.momentum_shorts, context.mom_weight),
        (context.value_longs, context.value_shorts, context.vle_weight),
        (context.reversal_longs, context.reversal_shorts, context.str_weight),
        (context.volatility_longs, context.volatility_shorts, context.vol_weight),
        (context.size_longs, context.size_shorts, context.sze_weight),
    ]

    def style_weight(stock):
        # Sum the stock's contribution across the five style baskets
        weight = 0.0
        for longs, shorts, factor_weight in baskets:
            n = 2 * (len(longs) + len(shorts))
            if stock in longs:
                weight += factor_weight / n
            if stock in shorts:
                weight -= factor_weight / n
        return weight

    total_weight = sum(abs(style_weight(stock)) for stock in context.output.index)

    # Close positions that have dropped out of the universe (hedge ETFs aside)
    for stock in context.portfolio.positions:
        if stock not in context.output.index and stock not in etf_stocks:
            order_target_percent(stock, 0)

    multiplier = 0.5/total_weight

    # Take the opposite side of each style basket, scaled to half the portfolio
    for stock in context.output.index:
        order_target_percent(stock, -style_weight(stock)*multiplier)
There was a runtime error.

@QT,

I've spent a bit of time writing an algorithm which attempts to eradicate the specific returns.

Wouldn't you want to do the opposite and eradicate the common returns? Maybe it's just a typo on your part. What I think Q is looking for are specific returns attributable to alpha factors that are not attributable to common risk factors.

If you can isolate only the common returns (the returns which Quantopian defines), you can then go long your underlying strategy and short this strategy; the resulting performance is the isolated specific returns.

Fine, if it works for you. I just see it the other way.

Quantopian defines specific returns as the difference between the strategy returns and the common returns. The only way you can isolate specific returns is by eliminating the common returns (making the common returns curve flat).
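In code form, that definition is just a subtraction of return streams; whatever is left after removing the common component is, by definition, specific. A toy numpy sketch (all numbers invented):

```python
import numpy as np

# Invented daily return streams for a strategy
total_returns  = np.array([0.010, -0.004, 0.006, 0.002])
common_returns = np.array([0.007, -0.006, 0.001, 0.004])

# Specific returns are whatever common returns don't explain
specific_returns = total_returns - common_returns

# Flattening the common curve (driving common_returns toward zero)
# leaves specific_returns as the whole performance.
```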

For example,

Common Returns Unconstrained

Common Returns Constrained

It would be better if the algorithm were good, but the constraint (when applied) does significantly reduce common returns. It would be more effective if I had more knowledge of what Quantopian uses to model the factors.

@QT,

The only way you can isolate specific returns is by eliminating the common returns (making the common returns curve flat).

Isn't this exactly what I said, "Wouldn't you want to do the opposite and eradicate the common returns?"

Yes, but to do that you first need to find a way to isolate the common returns, which is what I tried to do in the algorithm above

You might as well wait until they release the white paper on the Optimize API with risk-loading constraints, because you are assuming their computation of common returns from their narrative, which might not reveal everything. See the discussion in this thread: ...new-tool-for-quants-the-quantopian-risk-model

@Quant Trader. Very nice. I have to wonder, though: why would you settle for less than 10% annual returns when you have seen a 12164.1% annual return with a 22.83 Sharpe and only 1.5% drawdown? Was there something fundamentally wrong with the high-performance version? I'd go for it and try to fix the problems without sacrificing all the returns. That's just me though. Good job anyway; that's the highest Sharpe I can remember seeing.

I didn't read the entire thread, but it seems built on a misunderstanding (or at least the opposite understanding from the one I have).

Returns = Common returns + specific returns

Common returns are things like beta, big minus small, momentum, etc. -- all the "common" risk factors. Therefore, specific returns are whatever is not accounted for by Quantopian's common risk factor models.

Therefore, negative common returns don't mean that common returns are subtracting from your performance. They mean that your returns go beyond merely having no correlation to the common factors: they are inversely correlated to the common risk factor models, which is what you would expect if your returns come from a source distinct from beta, big minus small, momentum, etc.

Correct me if I'm wrong.
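That reading can be checked numerically: regress the strategy's daily returns on the factor returns; the fitted part plays the role of common returns, and the intercept plus residual is specific. In the synthetic example below (all data invented), the strategy is built to be inversely correlated with the factors, so the estimated betas come out negative even though the strategy itself is profitable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented daily returns for two common factors over 250 trading days
factors = rng.normal(0.0, 0.01, size=(250, 2))

# A strategy that is inversely correlated to both factors, plus its own alpha
alpha = 0.001
strategy = (alpha
            - 0.4 * factors[:, 0]
            - 0.2 * factors[:, 1]
            + rng.normal(0.0, 0.002, 250))

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(len(factors)), factors])
coef, *_ = np.linalg.lstsq(X, strategy, rcond=None)

common = factors @ coef[1:]    # fitted "common" component
specific = strategy - common   # intercept + residual = "specific" component

# coef[1:] are negative: the common-return curve slopes down even
# though the strategy's total and specific returns are positive.
```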