Is something like this worthy of Q fund?

Here is something I tried, though there is some selection bias in it. Is a strategy with this return profile worthy of the Q fund? It can also absorb quite a lot of capital.

import numpy as np
import statsmodels.api as smapi

def initialize(context):
    set_symbol_lookup_date('2015-07-01')
    context.XLE = sid(8554)
    set_benchmark(context.XLE)
    schedule_function(myfunc, date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=30))

def handle_data(context, data):
    record(l=context.account.leverage)

def myfunc(context, data):
    prices = history(20, "1d", "price")
    prices = prices.dropna(axis=1)
    prices = prices.drop([context.XLE], axis=1)
    ret = prices.pct_change(5).dropna()
    ret = np.log1p(ret).values
    cumret = ret  # np.cumsum(ret, axis=0)
    xle = np.mean(cumret, axis=1)

    i = 0
    score = []
    for sid in prices:
        diff = np.diff(cumret[:, i])
        X = smapi.add_constant(diff)  # reconstructed: X was undefined in the pasted code
        Y = np.diff(cumret[:, i] - xle)
        res = smapi.OLS(Y, X).fit()
        if len(res.params) > 1:
            score.append(res.params[1])
        else:
            score.append(0)
        i += 1

    netscore = np.sum(np.abs(score))

    i = 0
    wsum = 0
    for sid in prices:
        try:
            val = 500000 * score[i] / netscore
            order_target_value(sid, val)
            wsum += val
        except:
            log.info("exception")
            i += 1
            continue
        i += 1

    order_target_value(context.XLE, -wsum)
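The return pipeline at the top of myfunc (a 5-day percent change, then log1p to convert simple returns into log returns) can be reproduced outside the backtester with plain pandas; the tickers and prices below are made up for illustration:

```python
import numpy as np
import pandas as pd

# Toy price series for two hypothetical tickers (names are illustrative only)
prices = pd.DataFrame({
    "AAA": [100, 101, 103, 102, 104, 105, 107],
    "BBB": [50, 50, 51, 52, 52, 53, 54],
}, dtype=float)

# 5-day simple returns; the first 5 rows are NaN and get dropped
ret = prices.pct_change(5).dropna()

# log1p(r) == log(1 + r), i.e. the 5-day log return
logret = np.log1p(ret)
print(logret.round(4))
```

Note that log1p of a simple return over 5 days equals log(p_t / p_{t-5}), which is why the algorithm can treat these values as (approximately) additive.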

Well, since it works only with specific stocks, there is no real substance in this algorithm.

I don't remember now, Rob. I've had this list for over a year. I picked stocks that were around $100, I think.

I generated a tear sheet. I'm not the fund expert, but a couple of things caught my eye that would be problematic.

Good:
- consistent low beta
- has some form of hedging
- looks at a basket of stocks

Bad:
- the leverage just drifts - it's not really being managed; cash increases over time
- the hedge also drifts - it doesn't appear to be actively managed either

Thanks Dan.
That brings up an interesting question, though. I am back-testing over 10 years with a fixed investment, keeping all profits in cash. If I re-invest the profits, then the drawdown is inflated, because a 10K loss on 200K would be reported as a 10% drawdown against the 100K base. Hope I am making sense.

Pravin, this is an awesome algorithm!!! Very cool profile. I was wondering, though, if you could provide some more information about the details of the algorithm - maybe a version of the code with detailed comments and a specific outline of the strategy. Sincere thanks. Regards, Andrew

Hi Pravin, very impressive results on your algo! As Dan mentioned above, I think the decreasing leverage over time is a minor issue, and in fact it was a simple fix for me to update and re-run the backtest. I only modified a single line of code in a cloned version of your algo (I've attached it here in this response, along with the corresponding tearsheet). The max drawdown goes up a slight amount, from -10% to -12%, and the Sharpe ratio improves as well.

For clarification, max drawdown is defined as 'peak-to-trough' drawdown. So using your hypothetical example above of a "10K loss on 200K reported as a 10% drawdown on 100K": the 10K loss on 200K is actually only a 5% drawdown.

You made a great point that there might be some selection bias in your universe, because you mentioned you hand-selected the stocks using the criterion of being over $100 per share. This may unnaturally bias you toward known good performers (since most companies do not IPO at $100, stocks trading at $100 when you selected them would no doubt have already risen significantly). This is just speculation on my part, but it is worth investigating further.
As a robustness test, perhaps you could use our fundamentals data to dynamically select the top 200 stocks (or however many are in your current list) by market cap (since it seems you are using large-cap stocks) as your universe, apply your portfolio-construction logic to that dynamically selected universe, and see how the performance compares to the existing algo.

Let me know if you have any other questions or thoughts.

Small aside: in the attached backtest, I renamed the 'context.XLE' and 'xle' variables to 'context.HEDGE' and 'hedge' just to make the code more readable, since the SPY sid() was being stored in context.XLE and that had me confused for a little while :)
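The peak-to-trough definition Justin describes can be checked with a few lines of standalone Python; the equity curve below is hypothetical:

```python
import numpy as np

def max_drawdown(equity):
    """Peak-to-trough max drawdown of an equity curve, as a (negative) fraction."""
    equity = np.asarray(equity, dtype=float)
    running_peak = np.maximum.accumulate(equity)  # highest equity seen so far
    drawdowns = equity / running_peak - 1.0       # drop from that peak
    return drawdowns.min()

# Justin's point: a 10K loss from a 200K peak is a 5% drawdown, not 10%
curve = [100_000, 150_000, 200_000, 190_000, 195_000]
print(round(max_drawdown(curve), 4))  # -0.05
```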

import numpy as np
import statsmodels.api as smapi

def initialize(context):
    set_symbol_lookup_date('2015-07-01')
    context.HEDGE = sid(8554)
    set_benchmark(context.HEDGE)
    schedule_function(myfunc, date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=30))

def handle_data(context, data):
    record(l=context.account.leverage)

def myfunc(context, data):
    prices = history(20, "1d", "price")
    prices = prices.dropna(axis=1)
    prices = prices.drop([context.HEDGE], axis=1)
    ret = prices.pct_change(5).dropna()
    ret = np.log1p(ret).values
    cumret = ret  # np.cumsum(ret, axis=0)
    hedge = np.mean(cumret, axis=1)

    i = 0
    score = []
    for sid in prices:
        diff = np.diff(cumret[:, i])
        X = smapi.add_constant(diff)  # reconstructed: X was undefined in the pasted code
        Y = np.diff(cumret[:, i] - hedge)
        res = smapi.OLS(Y, X).fit()
        if len(res.params) > 1:
            score.append(res.params[1])
        else:
            score.append(0)
        i += 1

    netscore = np.sum(np.abs(score))

    i = 0
    wsum = 0
    for sid in prices:
        try:
            # val = 500000 * score[i] / netscore
            val = context.portfolio.portfolio_value * score[i] / netscore
            order_target_value(sid, val)
            wsum += val
        except:
            log.info("exception")
            i += 1
            continue
        i += 1

    order_target_value(context.HEDGE, -wsum)
Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

Pravin,
And here is the corresponding tearsheet for the updated backtest directly above.

Best,
Justin


.11 beta looks great. On the other hand, there are some practical details to consider. This holds a short position in SPY ranging from half a million to at least 1.5 million at times; a version for not-yet-millionaires could be worth talking about.

More significantly, since transactions substantially exceed starting capital, a profit-vs-risk (PvR) return calculation comes in lower than the chart indicates - something one ignores at one's wallet's peril when moving to real money. My view on this is still evolving; currently I think the way to go is to calculate context.portfolio.pnl (now) divided by highest_risk (at any point), where, if shorting is present, shorts count as risk in the same sense as buying stocks: take the abs() value of whichever is greater, maximum cash spent or maximum shorting.
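That PvR idea can be written down in a few lines. The function and numbers below are illustrative only, not the poster's actual implementation:

```python
# Sketch of a "profit vs risk" return: profit divided by the largest amount
# ever actually at risk (cash deployed or short exposure, whichever peaked
# higher). The input series are hypothetical daily snapshots.

def profit_vs_risk(pnl, cash_spent, short_exposure):
    highest_risk = max(max(abs(c) for c in cash_spent),
                       max(abs(s) for s in short_exposure))
    return pnl / highest_risk

# e.g. 330k of profit, but up to 1.5M shorted at some point along the way:
print(profit_vs_risk(330_000, [500_000, 800_000], [500_000, 1_500_000]))  # 0.22
```

Against a nominal 500k starting capital the same profit would read as 66%, which is how leverage quietly inflates a raw returns chart.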

A backtest of a copy running on a top-50 universe is, thus far, performing unremarkably.

Attached.

import numpy as np
import statsmodels.api as smapi

def before_trading_start(context, data):
    # Fundamentals screen. The pasted version had this at module level, where
    # `context` is undefined; it presumably belongs here, running daily.
    fundamental_df = get_fundamentals(
        query(fundamentals.asset_classification.morningstar_sector_code,
              fundamentals.valuation.market_cap,
              fundamentals.share_class_reference.symbol,
              fundamentals.company_reference.standard_name,
              )
        .filter(fundamentals.valuation.market_cap != None)
        .filter(fundamentals.company_reference.primary_exchange_id != "OTCPK")  # no pink sheets
        .filter(fundamentals.asset_classification.morningstar_sector_code != None)  # require sector
        .filter(fundamentals.share_class_reference.security_type == 'ST00000001')  # common stock only
        .filter(~fundamentals.share_class_reference.symbol.contains('_WI'))  # drop when-issued
        .filter(fundamentals.share_class_reference.is_primary_share == True)  # remove ancillary classes
        .order_by(fundamentals.valuation.market_cap.desc()),
    ).T
    context.stocks = fundamental_df[0:50].index
    update_universe(context.stocks)

def initialize(context):
    context.HEDGE = sid(8554)
    set_benchmark(context.HEDGE)
    schedule_function(myfunc, date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=30))

def handle_data(context, data):
    record(l=context.account.leverage)

def myfunc(context, data):
    prices = history(20, "1d", "price")
    prices = prices.dropna(axis=1)
    prices = prices.drop([context.HEDGE], axis=1)
    ret = prices.pct_change(5).dropna()
    ret = np.log1p(ret).values
    cumret = ret  # np.cumsum(ret, axis=0)
    hedge = np.mean(cumret, axis=1)

    i = 0
    score = []
    for sid in prices:
        diff = np.diff(cumret[:, i])
        X = smapi.add_constant(diff)  # reconstructed: X was undefined in the pasted code
        Y = np.diff(cumret[:, i] - hedge)
        res = smapi.OLS(Y, X).fit()
        if len(res.params) > 1:
            score.append(res.params[1])
        else:
            score.append(0)
        i += 1

    netscore = np.sum(np.abs(score))

    i = 0
    wsum = 0
    for sid in prices:
        try:
            # val = 500000 * score[i] / netscore
            val = context.portfolio.portfolio_value * score[i] / netscore
            order_target_value(sid, val)
            wsum += val
        except:
            log.info("exception")
            i += 1
            continue
        i += 1

    order_target_value(context.HEDGE, -wsum)

I backtested Justin Lent's version of Pravin Bezwada's "Worthy of Q fund" in daily mode.
What is wrong with it?

# Justin Lent's version of Pravin Bezwada's "Worthy of Q fund"

import numpy as np
import statsmodels.api as smapi

def initialize(context):
    set_symbol_lookup_date('2015-07-01')
    context.HEDGE = sid(8554)
    set_benchmark(context.HEDGE)
    schedule_function(myfunc, date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=30))

def handle_data(context, data):
    record(l=context.account.leverage)

def myfunc(context, data):
    prices = history(20, "1d", "price")
    prices = prices.dropna(axis=1)
    prices = prices.drop([context.HEDGE], axis=1)
    ret = prices.pct_change(5).dropna()
    ret = np.log1p(ret).values
    cumret = ret  # np.cumsum(ret, axis=0)
    hedge = np.mean(cumret, axis=1)

    i = 0
    score = []
    for sid in prices:
        diff = np.diff(cumret[:, i])
        X = smapi.add_constant(diff)  # reconstructed: X was undefined in the pasted code
        Y = np.diff(cumret[:, i] - hedge)
        res = smapi.OLS(Y, X).fit()
        if len(res.params) > 1:
            score.append(res.params[1])
        else:
            score.append(0)
        i += 1

    netscore = np.sum(np.abs(score))

    i = 0
    wsum = 0
    for sid in prices:
        try:
            # val = 500000 * score[i] / netscore
            val = context.portfolio.portfolio_value * score[i] / netscore
            order_target_value(sid, val)
            wsum += val
        except:
            log.info("exception")
            i += 1
            continue
        i += 1

    order_target_value(context.HEDGE, -wsum)

Could it be that, because orders in daily mode are executed at the next bar, you're placing more than one order for the same trigger?

So on day 1, you place an order for sid X, which is set to execute at the end of day 2. On day 2, you hit the same trigger for sid X and place another order for it, even though day 1's order hasn't yet been filled.

Inserting a very crude check for open orders in myfunc() seems to produce much different results:

    if get_open_orders():
        return


Seong

# Justin Lent's version of Pravin Bezwada's "Worthy of Q fund"

import numpy as np
import statsmodels.api as smapi

def initialize(context):
    set_symbol_lookup_date('2015-07-01')
    context.HEDGE = sid(8554)
    set_benchmark(context.HEDGE)
    schedule_function(myfunc, date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=30))

def handle_data(context, data):
    record(l=context.account.leverage)

def myfunc(context, data):
    # Skip this rebalance if yesterday's orders haven't filled yet
    if get_open_orders():
        return
    prices = history(20, "1d", "price")
    prices = prices.dropna(axis=1)
    prices = prices.drop([context.HEDGE], axis=1)
    ret = prices.pct_change(5).dropna()
    ret = np.log1p(ret).values
    cumret = ret  # np.cumsum(ret, axis=0)
    hedge = np.mean(cumret, axis=1)

    i = 0
    score = []
    for sid in prices:
        diff = np.diff(cumret[:, i])
        X = smapi.add_constant(diff)  # reconstructed: X was undefined in the pasted code
        Y = np.diff(cumret[:, i] - hedge)
        res = smapi.OLS(Y, X).fit()
        if len(res.params) > 1:
            score.append(res.params[1])
        else:
            score.append(0)
        i += 1

    netscore = np.sum(np.abs(score))

    i = 0
    wsum = 0
    for sid in prices:
        try:
            # val = 500000 * score[i] / netscore
            val = context.portfolio.portfolio_value * score[i] / netscore
            order_target_value(sid, val)
            wsum += val
        except:
            log.info("exception")
            i += 1
            continue
        i += 1

    order_target_value(context.HEDGE, -wsum)

Seong,

Thanks for reminding me of that tip, but I think Justin Lent should already know about it.

It's definitely a frustrating experience when you're switching from daily to minutely mode, totally on board with you there.

Daily and minutely modes are distinct for their own reasons - and the strategy Justin posted, like all strategies entered into the contest, was created for minutely mode, not daily.

But I get where you're coming from, and I hope tips like the get_open_orders() check help you address this problem if it comes up in the future.

Hi Seong,
Instead of typing in all the SIDs, could I read them in from a text file? So if I have a bunch of ticker names in a txt file, could I read those in directly?

Thanks!

You could use Fetcher to import the symbols and create a dynamically changing universe.
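Fetcher handles getting the file into Quantopian; the file-reading half of the problem is plain Python. The file name and tickers below are made up for illustration:

```python
# Read ticker symbols from a plain text file, one symbol per line.
# "tickers.txt" and its contents are hypothetical.
from pathlib import Path

Path("tickers.txt").write_text("AAPL\nMSFT\nXOM\n")

symbols = [line.strip()
           for line in Path("tickers.txt").read_text().splitlines()
           if line.strip()]  # skip blank lines
print(symbols)  # ['AAPL', 'MSFT', 'XOM']
```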


This algorithm looked quite promising.
Is there a clear reason why Simon's experiment ends in such unimpressive returns?
And if so, is there a clear path to improving it?

Many thanks all,
Regards,
Andrew

Hi, I'm having a bit of trouble picking up Python syntax. It looks like after the log1p(ret).values line, ret is an array of rows.

ret = np.log1p(ret).values
cumret = ret

What do the symbols in this line mean?
cumret[:,i]

I tried this syntax in a Python command line, but I guess the data type is not the same as the object in Quantopian; I keep getting errors when executing lines with similar syntax. The API documentation does not specify the data type returned by the history() function, which makes some of the code harder to follow.

Hi Gary,

ret[:,i] or cumret[:,i] means the series of returns for the ith stock.

Best regards,
Pravin
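A small NumPy example may make the slicing concrete. Here cumret is a 2-D array where rows are dates and columns are stocks, so `[:, i]` takes every row of column i:

```python
import numpy as np

# Rows are dates, columns are stocks (values are made up)
cumret = np.array([[0.01, 0.02],
                   [0.03, 0.01],
                   [0.02, 0.04]])

col0 = cumret[:, 0]      # full return series for stock 0: 0.01, 0.03, 0.02
chg0 = np.diff(col0)     # consecutive changes in that series: 0.02, -0.01
print(col0, chg0)
```

The errors at the command line likely came from trying `[:, i]` on a plain Python list; it only works on a NumPy array (or pandas equivalent), which is what `.values` returns.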

Thanks Pravin. Really cool algorithm, btw. I've been trying to figure out what exactly it's doing. From the code, it looks like you're regressing the differences of the price movements against the differences of the price movements away from the average. I can't really figure out the logic and/or rationale further than that, but in the big picture it looks like a momentum algorithm where you buy more of a stock if it has been going up. Could you confirm?

Hi Gary,

This algorithm doesn't work with other stock selections, and hence I discontinued it. It weights stocks by the volatility of their acceleration, and ideally it should go both long and short, but I couldn't figure out when to go short.

Best regards,
Pravin
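For anyone trying to replicate the scoring idea outside Quantopian, here is a standalone sketch on synthetic data. Note the regressor X is an assumption, since that line was lost from the pasted code, and NumPy's lstsq stands in for the statsmodels OLS call:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for cumret: 15 observations x 4 stocks of log returns
cumret = rng.normal(0, 0.01, size=(15, 4)).cumsum(axis=0)
hedge = cumret.mean(axis=1)  # equal-weight basket, the "market" proxy

scores = []
for i in range(cumret.shape[1]):
    x = np.diff(cumret[:, i])                  # changes in the stock's own series
    y = np.diff(cumret[:, i] - hedge)          # changes relative to the basket
    X = np.column_stack([np.ones_like(x), x])  # intercept + slope, like smapi.add_constant
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    scores.append(beta[1])                     # the slope is the stock's score

# Normalize so absolute weights sum to 1; signed scores give long/short tilt,
# and the hedge position is sized as minus the net weight
netscore = np.sum(np.abs(scores))
weights = np.array(scores) / netscore
print(weights.round(3))
```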

Your algo works amazingly well with stocks that are going up. I am sure we can improve it to work just as well when they are going down.

import numpy as np
import statsmodels.api as smapi

def initialize(context):
    set_symbol_lookup_date('2015-10-21')
    context.XLE = sid(8554)
    set_benchmark(context.XLE)
    schedule_function(myfunc, date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=30))

def handle_data(context, data):
    record(l=context.account.leverage)

def myfunc(context, data):
    prices = history(20, "1d", "price")
    prices = prices.dropna(axis=1)
    prices = prices.drop([context.XLE], axis=1)
    ret = prices.pct_change(5).dropna()
    ret = np.log1p(ret).values
    cumret = ret  # np.cumsum(ret, axis=0)
    xle = np.mean(cumret, axis=1)

    i = 0
    score = []
    for sid in prices:
        diff = np.diff(cumret[:, i])
        X = smapi.add_constant(diff)  # reconstructed: X was undefined in the pasted code
        Y = np.diff(cumret[:, i] - xle)
        res = smapi.OLS(Y, X).fit()
        if len(res.params) > 1:
            score.append(res.params[1])
        else:
            score.append(0)
        i += 1

    netscore = np.sum(np.abs(score))

    i = 0
    wsum = 0
    for sid in prices:
        try:
            val = 500000 * score[i] / netscore
            order_target_value(sid, val)
            wsum += val
        except:
            log.info("exception")
            i += 1
            continue
        i += 1

    order_target_value(context.XLE, -wsum)

The code from Sept 25 above showing 330% returns has a PvR (profit versus risk) return of only 77%, due to leverage over 2.
The JW code immediately above, once the line Seong points out is added so it won't go haywire, checks in at only 106% in the chart; however, its PvR is higher, ~180% (because it does not spend all of the initial capital).
I think the main difference is simply more restriction to winning stocks in the second - our constant adversary, overfitting; I know it well.