Intraday US equities algorithm with a 4.5 Sharpe

With Quantopian's new low-commission model, intraday trading algorithms are profitable.

Here is an algorithm that trades $10,000,000 every day. Positions are opened in the morning and closed in the evening, with no overnight positions held in the portfolio.

Feedback welcome.

[Notebook attached; preview unavailable]

Thanks for sharing your strategy! Reviewing the tear sheet, it's well on the way to being a strategy that may receive an allocation.

Looking at the risk and performance:

  • The in-sample Sharpe is positive and high; it will be interesting to see how it tracks out of sample
  • Beta exposure is neutral, with the returns being driven by the alpha signal
  • Fama-French factor exposure is low, suggesting little exposure to those three common factors
  • The portfolio is cross-sectional, with a diversified set of holdings from the Q1500 tradable universe
  • Position concentration is low, diversifying risk across many holdings rather than taking a concentrated bet in any single ticker
  • Algorithm is sector neutral, removing the sector exposure risk
  • Algorithm is dollar neutral, removing the market exposure risk
  • Algorithm is unleveraged, showing the raw returns of the strategy
  • Algorithm avoids trading directly at the market open, when spreads tend to be very wide, driving higher execution costs. The selected times still have good liquidity with tighter spreads.

Bravo! For the next steps in the strategy's development, I'd suggest:

  • Analyze the strategy over longer time periods to see how it's performed in different market regimes
  • See how the performance survives once transaction costs are layered onto the strategy
  • Collect out of sample data to track the performance

Good luck,
Alisa


Hi Alisa,

Thanks for your detailed analysis and positive feedback. At the moment I am using Quantopian's new commission model (0.1 cents per share) with no slippage model. Am I right in assuming that the backtester automatically takes care of the bid-ask spread?

Is there a function in pyfolio that will allow me to analyze results as a function of increasing costs? That way, I won't have to run the algorithm multiple times with different cost assumptions. It will also tell me the break-even cost.

Best regards,
Pravin

Can you start the backtest in, say, 2005 and show us the rolling Sharpe ratio?

Hi Alisa,

Could you please elaborate on this point:

"Algorithm avoids trading directly at the market open, when spreads tend to be very wide, driving higher execution costs. The selected times still have good liquidity with tighter spreads."

Isn't the backtest transaction cost model independent of the time of day? Or is "execution cost" in this context different from "transaction cost" which would be imposed by the broker?

Also, you suggest that there may be some optimum time to trade, relative to the open and close. Can one sort that out on Quantopian, without the bid-ask data? For the Q1500US as a whole, are there sweet spots after the open and before the close when one would want to trade (for the type of algos you are looking to fund)? Across stocks, how much variation in the optimum trading times exists?

Just curious--how could you tell, just from a tear sheet, what times of day the algo is trading? I suppose you took a peek at the backtest 'exhaust' to see when the trades were executed.

Hi Grant,

If you look at the Transaction Time Distribution, i.e., the last graph in the tear sheet, it would appear the trading activity is at hours=1 after the market opens and minutes=30 before the market closes. Pravin would have coded these in scheduled functions to start and end trading at those specific times, as sketched below.
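
A minimal sketch of that scheduling (function names are hypothetical, inferred from the tear sheet rather than taken from Pravin's code):

def initialize(context):
    # Open positions one hour after the market open...
    schedule_function(enter_positions,
                      date_rules.every_day(),
                      time_rules.market_open(hours=1))
    # ...and flatten the book 30 minutes before the close.
    schedule_function(exit_positions,
                      date_rules.every_day(),
                      time_rules.market_close(minutes=30))

def enter_positions(context, data):
    pass  # compute targets and place orders here

def exit_positions(context, data):
    for stock in context.portfolio.positions:
        order_target_percent(stock, 0.0)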

Karl

Thanks Karl -

Cool. Is that a new pyfolio feature? I don't recall seeing it before.

On a related note, perhaps it would be a tall task, but a change log of substantive features would be nice (e.g. https://www.quantopian.com/posts/is-there-a-change-slash-feature-log ). Personally, I'd review it from time to time.

Karl, it's 5% per annum, NOT daily.

Now, Pravin, how can we help improve it?

Hi Peter,

I have been thinking about it, and it is very sensitive to slippage. At the moment I am opening positions 1 hour after the market open and closing them in the last 30 minutes. Is there a way I could book profits/losses intraday, instead of waiting for the last 30 minutes before the close? Also, do you have any ideas on volume analysis? The current algorithm does not use volume information, and I think that could add value.

Best regards,
Pravin

I'm a complete novice in market microstructure and intraday strategies.

But maybe using something like https://www.tradingsetupsreview.com/guide-volume-spread-analysis-vsa/ would help to determine possible exit signals during the day, instead of the last 30 minutes?

It might also be helpful to check prices throughout the day, and rebalance certain stocks through some risk parity.

Thanks Cheng Peng. That is a good idea. I can look at the return distribution and book profits periodically beyond a threshold while maintaining the balance of the portfolio.

Hi Pravin,

Assuming you have a mechanism (technical signal, price target, or other) to execute at entry/exit points, including Cheng Peng's method: to keep track of booked profits/losses intraday, I'd use a global DataFrame stored on context, concatenating local/intraday DataFrames into it, so it can be referenced anywhere in the algorithm, as in the sketch below.
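
A minimal sketch of that bookkeeping pattern (names are hypothetical):

import pandas as pd

def initialize(context):
    # Global log, reachable from any scheduled function via `context`.
    context.pnl_log = pd.DataFrame(columns=['pnl', 'positions_value'])

def record_intraday_pnl(context, data):
    # Schedule this e.g. every 30 minutes; it appends one snapshot row.
    snapshot = pd.DataFrame(
        {'pnl': [context.portfolio.pnl],
         'positions_value': [context.portfolio.positions_value]},
        index=[get_datetime()])
    context.pnl_log = pd.concat([context.pnl_log, snapshot])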

Karl

Great! Happy I could help.

Needless to say, I am thoroughly impressed by your strategy here!

@Karl. Thanks. I see what you are doing there.
@Cheng. I got lucky.

@Aqua Rooster: Wow, that looks quite impressive!

It's a bit difficult to provide feedback, as we don't really know what assumptions went into the algorithm. Would you be willing to share some more details?

  1. Ability to short securities
  2. Near zero slippage
  3. Commissions of $0.001 per share
  4. 100% fill rate

The key question here is whether the algorithm will survive transaction costs. The backtester doesn't model the bid-ask spread directly, but the slippage model is meant to account for that. Quantopian's default VolumeShareSlippage model is a starting point.

You can use the transaction settings that are in the contest:

set_slippage(slippage.VolumeShareSlippage(volume_limit=0.025, price_impact=0.1))  
set_commission(commission.PerShare(cost=0.001, min_trade_cost=0))  

Looking at the universe and daily turnover, the algo holds 400 positions with 100% daily turnover, so it's unlikely to survive in live trading. If you were able to achieve 2 bps of slippage, that is about 5% in costs per year (2 bps per day over roughly 250 trading days). The annualized alpha is 5%, so the costs cut away the returns.

You could estimate transaction costs using a 30-minute window around the execution time (the calculation could be scheduled to run once a day at the market close). Since at the end of the day you also know all the model trades, you could calculate a daily "bid/ask expense" and store the cumulative value in a context variable (which you could then plot alongside the performance). You can assume a constant factor translating 1-minute-frequency volatility into a bid/ask spread, and run the backtest under a couple of values of that factor.
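
A rough sketch of that estimate, assuming the algorithm records its trade sizes somewhere (the factor value and the context.todays_trades container are made up):

VOL_TO_SPREAD = 0.5  # assumed volatility-to-spread factor; rerun with a few values

def initialize(context):
    context.todays_trades = {}  # hypothetical: asset -> dollars traded today
    context.cum_spread_cost = 0.0
    schedule_function(estimate_spread_cost,
                      date_rules.every_day(),
                      time_rules.market_close())

def estimate_spread_cost(context, data):
    daily_cost = 0.0
    for stock, traded_dollars in context.todays_trades.items():
        # 1-minute volatility over a ~30-minute window near execution
        prices = data.history(stock, 'price', 30, '1m')
        vol_1m = prices.pct_change().std()
        daily_cost += abs(traded_dollars) * vol_1m * VOL_TO_SPREAD
    context.cum_spread_cost += daily_cost
    record(cum_spread_cost=context.cum_spread_cost)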

Hi Karen,

Thanks for your excellent suggestion.

Best regards,
Pravin

@Aqua/Pravin, Ok, so you're secretive, but why not tell us the general principle at play here -- not the mechanics, but the economic assumption behind your signal? Yes, we see every day at 10:30am you put in $5mil long and $5mil short, and clear out everything 15 minutes before close. But it's hard to suggest anything concrete when we don't even know what we're looking at.

Personally, I guess the question is how much you prioritize Sharpe over alpha. If it were for me, I'd want to see some returns and could stomach a few percent volatility, no sweat. The Q Open, on the other hand, penalizes volatility really heavily and doesn't seem to care too much about returns -- so it seems like you're good there if that's your goal. But the drawdown periods, though not deep, are expansive, especially 2016 -- SPY is skyrocketing and your algo sits still for what looks like an entire year. That would be hard for me. Why do you think your alpha signal fails there?

I assume you've A/B-tested it, and holding just during the day works out better than simply rebalancing once a day? Also better than a full rebalance twice a day (holding a different basket of stocks overnight)?

Different sectors of the market do slooooowly fluctuate between having a daytime edge and a nighttime edge. Nonetheless, day and night are typically roughly 50/50. By only being in the game during the day, you're missing out on half the action. Can you develop a night time strategy to complement this daytime strategy? Or what about simply at least parking that cash in interest-paying bonds over night? Your algo has been heavy on utilities -- and utilities have tended to make all their money over the backtest period during the day, and have lost money overnight and during the first hour. I've tried to track the difference in momentum of these time-of-day moves to no avail, but if you can crack that cookie you can maybe get more alpha out of this algo and implement a night strategy as well.

As far as using volume to deal with liquidity or slippage issues, you can start with something simple -- limit all your start-of-day orders to x percent (probably 2% at most?) of the average daily volume, and allocate the extra cash to the securities with more volume. A sketch follows.
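
A simple sketch of that cap (the helper name and thresholds are invented):

MAX_ADV_FRACTION = 0.02  # cap each order at 2% of average daily volume

def cap_orders_by_adv(context, data, targets):
    # targets: dict of asset -> desired dollar position (signed)
    capped, leftover = {}, 0.0
    for stock, dollars in targets.items():
        adv_shares = data.history(stock, 'volume', 20, '1d').mean()
        price = data.current(stock, 'price')
        max_dollars = adv_shares * price * MAX_ADV_FRACTION
        size = min(abs(dollars), max_dollars)
        capped[stock] = size if dollars > 0 else -size
        leftover += abs(dollars) - size
    # `leftover` could then be re-allocated pro rata to the most liquid names
    return capped, leftover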

@Alisa, I'm curious -- why does Quantopian lack bid/ask spread data in the backtester? I'm a total novice here, so I don't really understand a lot of things about the stock market. Does using limit orders help make the backtest slippage simulation more realistic, or is the lack of bid/ask spread data still going to throw everything off? I guess Q will say an order filled when it wouldn't have, in cases where the price hits the limit price but the ask doesn't?

@Viridian - Thanks for your inputs. I cannot disclose the general principle because it will dilute the alpha going forward. As it is, anomalies are hard to find in an efficient market like the US, and even if I gave a high-level overview, Quantopians are smart enough to figure it out. I am changing it such that it is no longer intraday but rebalances once every morning and holds positions overnight. Will post the results soon.

You are right that I prioritize Sharpe over alpha, because I want to reduce drawdowns. Most people cannot stomach drawdowns. I hear from the street that at a 5% drawdown on unlevered capital you are out of the game. I will post the results of the overnight strategy in an hour or so.

Hello Pravin (aka Aqua Rooster),

The 5% drawdown limit on unlevered capital seems reasonable, but over what time frame? I think one has to be careful here, since running just a 2-year backtest (e.g. as required by the contest) might need to have a tighter limit (and the 6-month contest/fund out-of-sample period would seem to be way too short, especially if one is simply evaluating the algo as a black box without a firm understanding of a potential "edge").

The SR ~ 4.5 should be a clue that the algo is probably unrealistic--my understanding is that long-term, SR ~ 1ish would be more realistic, and I'd suspect the Q backtester is spitting out inflated numbers. If you use the settings recommended by Alisa above, what happens?

set_slippage(slippage.VolumeShareSlippage(volume_limit=0.025, price_impact=0.1))  
set_commission(commission.PerShare(cost=0.001, min_trade_cost=0))  

Just curious, did you use the optimize API? If so, perhaps you'd be willing to share your block of code along with the settings? I gather that the Q approach will be to work with authors and if possible, push the combined alpha (portfolio update) through a risk-mitigating function, based on the optimize API, to control various risk factors on an individual algo basis (versus trying to diversify by cobbling together lots of algos).

Hi Grant,
I am not sure about the time frame of the 5% drawdown limit; I will come back to you when I find out more.
Regarding the 4.5 SR, it is quite possible for intraday algorithms; I have heard of higher SRs for high-frequency algorithms.
I use 0.001 commissions. But I cannot use the 0.025 volume-limit slippage, because that means it would take 40 one-minute bars to fill one bar's worth of volume, which to me is unrealistically slow. Instead I have attached a new backtest where positions are liquidated in the last 30 minutes, using a slippage of 1 bps. Beyond this, the algorithm is not profitable. Maybe someone at an HFT firm with advanced execution algorithms could use this strategy and close positions at the VWAP price. It has annual returns of 8% with an SR of 2.37 at 1 bps of slippage. Also, there is a huge loss during the week of 8th December 2014, which I am guessing is a data issue; it needs investigation.

I am using the optimize API, but in fact copied the code from one of your algorithms :). Nothing out of the ordinary; just basic optimization constraints to comply with Quantopian's risk model.

By the way, I am going to try Karen's suggestion above. I just need to figure out how to get the transaction information from pyfolio, extract daily prices, compute the closing VWAP price, and check my new P/L curve -- something like the sketch below.
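
In the research environment, that workflow might look something like this (the backtest ID and dates are placeholders):

bt = get_backtest('...')   # placeholder backtest ID
txns = bt.transactions     # DataFrame with amount, price, sid, dt, ...

# Minute prices/volumes for the traded names, from which a 30-minute
# closing VWAP per day can be computed and compared to the fill prices.
assets = txns.sid.unique()
minute_data = get_pricing(assets, start_date='2010-01-04',
                          end_date='2017-08-01', frequency='minute',
                          fields=['price', 'volume'])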

Best regards,
Pravin

[Notebook attached; preview unavailable]

Well, it would be interesting to hear from Alisa again, because if I'm interpreting correctly, there could be a "mirage" here in terms of what Q simulation (backtesting/paper trading) would indicate, and what would actually happen in the market (i.e. if an algo is "unlikely to survive in live trading" as she says above, then there ought to be a way to configure the algo so that fact would pop out and be obvious).

I would note that Q supports custom slippage models (unless something has changed), so there should be some flexibility in incorporating real-world effects beyond the canned slippage model.

Yes, Grant. The "mirage" is because I am not using the default slippage model. I am going to run a backtest with the default slippage model and see if that works. Basically, if I cannot enter it into the contest, there is no point in it.

@Pravin, may I make a suggestion?

Try removing the slippage altogether by going with limit orders. This way, by fixing the price, you get no slippage. You will miss some trades, but overall it might come close to balancing out.
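
A minimal sketch of that idea in the IDE, assuming the built-in LimitOrder style:

def enter_with_limits(context, data, weights):
    # Fixing the price with a limit means no simulated slippage on
    # fills, at the risk of missing some trades entirely.
    for stock, weight in weights.items():
        price = data.current(stock, 'price')
        order_target_percent(stock, weight, style=LimitOrder(price))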

Your current scenario is subject to more slippage than you might think. If you went for the 2.5% of volume rule, you might see your orders spread out over the next few minutes (up to 30 minutes at times).

Nonetheless, the settings proposed by Quantopian set no minimum commission. I don't think that is realistic.

Quantopian should reveal what will be the actual measures they will use in their tests to give allocation. Otherwise, you might be simply dealing with a “mirage” as you said.

A napkin calculation of just $0.01 of slippage on the volume presented in your notebook (roughly $20 million traded per day is several hundred thousand shares) totals, over a 2-year period, to a cost in the vicinity of $3.7 million. A $1.00 minimum per trade eats up another $200k over the same period. Together, they eat your lunch!

Thanks Guy Fleury. I don't think Quantopian allows limit orders with the optimize API. I guess I will have to roll out my own optimizer using CVXPY.

Hi Pravin,

The other thing to consider: I gather that Quantopian now has a strong preference for algos that find alpha in alternative data sets and futures. So, if this strategy uses OHLCV bars exclusively, it might be o.k. for the contest, but not so hot for the fund. That said, perhaps you could treat it as one of many factors combined in a multi-factor strategy that includes both OHLCV-based factors and ones based on alternative data sets. Such a strategy presumably would be acceptable for the fund (though it is not clear how one would then demonstrate that the factors based on alternative data are playing a significant role). And of course, only data sets that are free could be used, to be eligible for the contest.

Regarding writing your own code using CVXPY, my understanding is that Q is still planning to open-source the optimization API, so you might get a sense of the timing of that release before you put too much effort into custom code. You should be able to recycle some of it.

Grant

Hi Alisa,

Here is a modified version with the following changes:
1. Reduced the number of traded stocks by filtering for opportunity.
2. Better clustering/grouping of stocks.
3. Better models.
4. Excluded M&A events.
5. Filtered stocks by the earnings calendar.

The Sharpe is now 5.6 and the alpha is 14%. However, it still uses only commissions and ignores slippage. I simply cannot use the default slippage model provided by Quantopian, because I want to close all positions in the last 30 minutes and the default slippage model does not allow that. Is there any other way I can test the viability of this algorithm on Quantopian? I believe Dan mentioned that you are revamping your slippage model. Ideally, I would like to close positions using the last 30/60-minute VWAP price plus slippage.

Best regards,
Pravin

[Notebook attached; preview unavailable]

"I simply cannot use the default slippage model provided by Quantopian, because I want to close all positions in the last 30 minutes and the default slippage model does not allow that."

If that's the case, then your algorithm is not viable with $10M in capital. That means it's a small-time strategy at best, or completely worthless in real-life trading at worst. Q does not want strategies like this, nor do institutional investors. Assuming you are using 1.0x leverage on $10M across 400 positions, you're trying to sell $25,000 worth of each stock in 30 minutes at the end of the day. Not all stocks can tolerate that volume; insisting won't make it so.

@Guy Fleury

Although limit orders may reduce slippage in the sense that they reduce price drag, there is another component that the Quantopian slippage model tries to simulate, which is price impact. In general, we group transaction costs into 3 categories: explicit costs, implicit costs, and missed trade opportunity costs.

Explicit costs deal with things like commission and fees, which would be handled by the set_commission() method. This is an unavoidable cost of trading.

Implicit costs address the impact that wanting to trade has on the market. Any new information to the market signals to investors that they should adjust their prices. Notice that you do not necessarily have to execute an order in order to have an impact on the market! Consider a limit order that is non-marketable (i.e., away from current market price, gets added to the limit order book). Even though the order is awaiting a fill, it is still impacting the market as it is public information. In a scenario like this, the limit order would move the price further away from you without even getting you a fill!

Missed trade opportunity costs are also relevant in this scenario, since giving up urgency means that you're exposing yourself more and more to the chance that the market may move away from you. This introduces a skew into the distribution of your transaction costs: you may get a lot of fills at slightly favorable prices, but you may also be forced to execute some at highly unfavorable prices, if at all. It's important to note the danger of stale limit orders in the higher-latency context we're in. Imagine you're a buyer with a limit away from the market: you'd only get filled as the stock price falls, unintentionally introducing a style bias into your trading. This opens you up to potentially severe losses during times when short-term momentum is a dominant force. Furthermore, missed trades are problematic, as your carefully calculated exposures are probably based on the assumption of completion, so you cannot rely on missed shares "balancing out" in the end.

A good starting point when you're thinking about slippage is the slippage vs. cumulative return plot included in the pyfolio tear sheet. Always look at that plot to gauge the sensitivity of your algo to varying levels of transaction costs. Market impact cannot be avoided, but the Quantopian slippage model is a first step in determining whether your algo is properly taking it into account!
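
A minimal sketch of pulling that analysis up directly, assuming the usual returns/positions/transactions DataFrames from a backtest:

import pyfolio as pf

# Cumulative returns re-simulated under several flat per-dollar
# slippage assumptions (in basis points).
pf.plotting.plot_slippage_sweep(returns, positions, transactions,
                                slippage_params=(1, 2, 5, 10, 20))

# Sharpe ratio as a function of per-dollar slippage; where the curve
# reaches zero is roughly the break-even cost.
pf.plotting.plot_slippage_sensitivity(returns, positions, transactions)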

@Aqua Rooster, you could address @Luke Izlar's concern by filtering on liquidity. Basically, only hold positions and trade in securities whose trade size is below a certain % of their ADV (average daily volume). This will undoubtedly reduce your universe from what it is now, but it will make your backtest substantially more realistic.
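
A sketch of such a screen with Pipeline (thresholds invented for illustration):

from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume
from quantopian.pipeline.filters import Q1500US

def make_liquid_universe(capital=10e6, n_positions=400, max_adv_frac=0.01):
    # Keep only names where the intended per-name trade is a small
    # fraction of 20-day average dollar volume.
    per_name_dollars = capital / n_positions
    adv = AverageDollarVolume(window_length=20)
    liquid_enough = adv > (per_name_dollars / max_adv_frac)
    return Pipeline(columns={'adv': adv}, screen=Q1500US() & liquid_enough)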

Now I like the look of that returns graph much better. Silky smooth. And that sharpe ratio!

I don't know enough about this stuff to totally understand the slippage issue. Is your argument that Q's default slippage model is unrealistic and that it's totally feasible for you to dump $10 mil worth of stock (something like $100,000 per stock) within 30 minutes every day? Or is it, as other people are arguing, that wishful thinking is getting the better of you?

I read somewhere that Q will soon be updating their slippage models to make them more accurate. Maybe that will help when it comes out. But in the meantime for the Q Open I think you have to work with the default slippage model.

What happens if you enable the default slippage model and include an extra scheduled function that runs the next morning at the open to finish closing positions that didn't get filled by the market close?

Or, instead of closing everything out 30 minutes before the close, what if you split your stocks into baskets according to average-daily-volume thresholds, so that super-liquid stocks you'd try to dump 10 minutes before the close, more illiquid stocks 2 hours before the close, and various fine-tuned degrees in between? (A rough sketch follows.) Obviously there's a lot of day-to-day variance in volume, so the method won't be foolproof, but it's probably better than the one-size-fits-all approach you currently have.
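
A rough sketch of that tiering (the cutoffs are invented):

def exit_minutes_before_close(adv_dollars):
    # Map average dollar volume traded to how early the unwind should start.
    if adv_dollars > 500e6:    # very liquid: dump near the close
        return 10
    elif adv_dollars > 50e6:
        return 45
    else:                      # illiquid: start unwinding much earlier
        return 120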

It's still eluding me why holding overnight has such a detrimental effect on your strategy. I would have thought it would just be extra noise at worst. If there is indeed some consistent pattern of your strategy reverting overnight, that's something you could potentially exploit or hedge against, right?

Did you double-check for possible look-ahead bias?

@Luke - Maybe you are right and I should discard this algorithm. But before I do that, I want to get some data to see whether $100,000 really cannot be executed in the last 30 minutes on liquid stocks (Q1500).

@Karen - I am using the Q1500 universe, which filters for ADV, but probably that is not enough. I will reduce the universe as you suggested and try it out. Maybe something will work. I always believed it is easy to execute $100K+ on liquid stocks during the last 30 minutes, or in fact even during the closing auction. I will talk to a couple of institutional people to get a better picture.

@Charles - I am using OHLCV data only, with no external data outside of Quantopian or third-party feeds, except for the M&A filter and earnings calendar. I think Quantopian protects against look-ahead bias.

@Viridian - I am generating results by closing as many positions as possible in the last 45 minutes and the remainder at the open the next day, as you suggested. This run uses commissions and the default volume slippage, and is currently taking forever to run.

@Beha, you are right on all counts.

However, the opportunity costs might be the hardest to determine, since they depend on what the future was or will be, and that is not accessible until after the fact. Will you wait for an "I should have done..." when you had to make a decision: do you enter the trade or not? And then live with the consequences, whatever they may be.

What is the opportunity cost of what could turn out to be a losing trade? Would you not again get another "I should have..."? That puts you in total trade paralysis, waiting for the outcome of all those "I should have..." moments, only to be confronted again with: do you enter the trade or not?

No matter what you do, there will be real and tangible costs that can be seen on entering a trade (for instance, commissions). That cost is so direct that your broker will charge it immediately. Slippage is much less visible, since in a simulation we don't even see the bid and ask, or what the volume in the book was.

Not having access to that information, we are almost operating blind as to how much the real slippage will be. I see the broker report the commission as soon as you enter a trade. But, I have never seen them report: you had such and such in slippage per share for that trade.

I presented a case in another thread where 189 trades on TMF were scattered all over the trading day. 70 of those trades were for 20+ shares at a time; the rest (119 trades) were for 20 shares or fewer (due to the 2.5% rule). Some might not want to consider slippage or commissions, but they do have an impact, and it is higher than they think.

Trades were occurring about every two minutes, at whatever price there was. Sure, you will get an average price at the end of the day. It could be close to the first trade taken, but then again, it might not be. From what the trade report gave for that day, trades were executed from $23.33 to $24.00 per share. That is far more than a penny per share of slippage; the executed range was $0.67.

Nonetheless, we need to design trading strategies that will survive these more or less hidden costs. It is not by hiding them that we will design better systems; it is by designing for them, even under these adverse trading conditions.

At a minimum, not having a realistic frictional-cost model should be considered detrimental to anyone's trading strategy, since what is presented will be fluffed and puffed to look much better than it really is.

The equation for a fund is: F(t) = (1+L)∙F(0)∙(1 + r_m + α − fc% − lc% − d%)^t, and the alpha extracted should be enough to cover all frictional costs: α > |fc%| + |lc%| + |d%|, where fc% covers commissions and slippage. The alpha should be even greater in order to justify the added work.

It is why, in my strategy designs, I prefer to use the default IB settings. This includes the current Quantopian slippage setting and the minimum cost per trade. A trading strategy should at least cover those costs.

I have a prejudice against trading strategies that consider frictional costs as a major portion of their potential profits. All it tells me is that their strategy is so weak that it can barely exceed its friction costs. And since frictional costs, as a percentage, are low, it can only mean the overall return (CAGR) is low as well. There is a cost to trading, and the more you trade, the more it will cost.

If α < |fc%| + |lc%| + |d%|, then the generated alpha would not be enough to cover expenses or even beat the averages.

Maybe it is another way of saying there is no free lunch, or maybe that your broker or somebody else wants your lunch. They will just take it, without even saying a word since you are freely giving it away.

@Pravin, I like your last notebook.

Before abandoning your strategy design, may I suggest that you take a look at the contest and funding requirements again. They do give you some latitude.

For instance, your last notebook showed a beta of 0.01, but the contest and funding can live with up to 0.30. Therefore, you could constrain your beta to 0.25, allowing your program to be slightly biased to the upside, say 0.55 long to 0.45 short. This will generate a net positive exposure to the market while maintaining the leverage requirements.

Your annual volatility was recorded as 0.03; there, you are allowed up to 0.10. Make the constraint 0.08. That should give you the ability to extract more of the alpha by catching more of the momentum, increasing your APPT.

Also, your drawdown is at -0.02, and you are allowed up to -0.10. Make the constraint -0.08 or -0.09. This will allow greater participation in all the momentum moves, from which you could extract some extra points.

Allow your trading strategy to trade more if it has an edge (APPT > 0). Let it do more of what it can do.

There are other moves that could improve performance even higher than what you will find using the above steps.

These moves are intended to increase APPT, your net average profit per trade. Letting the strategy trade more will increase n, the number of trades. Together they will increase your overall return to a point where you could not only win the contest, but, most importantly, get the allocation.

Best of luck.

@Guy Fleury - Thanks. I will try to figure out how to relax my stringent risk constraints.

Here is a backtest with the default volume slippage. Unfortunately, I can no longer run pyfolio on it (it runs out of memory). It closes approximately 75% of positions the same day and the remaining 25% the next morning.

There are some pretty long stretches where the algo is flat (including the most recent epoch). Given that the Q fund looks primarily at the most recent 6 months of out-of-sample results, I wonder if this would still be of interest? What if the next 6 months are flat?

@Pravin, from your screen snapshot, I noticed your equity curve is flattening out.

This too should be taken care of.

You need to compensate for return degradation. The easiest way is to make sure that n, the number of trades generating your APPT, grows not linearly but exponentially, even if only to a small degree.

You have two numbers that matter. First, APPT, which will tend to a limit due to the sheer size of n, and therefore will tend more and more toward a constant as the trading strategy evolves. Then you have n, which just increments one by one. Therefore, Δ(n∙APPT)/Δt will also tend toward a constant. Yet your competition is operating in a CAGR world.

There is no difference between (1+r)^t and (1+r)∙t when t = 1, but as t increases they spread apart. That is why your curve is flattening out compared to its benchmark. And, as said, it is easily correctable.

Presently, I estimate your strategy is made to break down by design, in or out of sample.

You have your framework. You have your trade mechanics. This is commendable. The next step is to take care of the return degradation problem.

The side effect of having this compensation in place is that the strategy should not break down out of sample; it should just continue to grow, by strategy design.

Then, you should try finding the strategy's limits. If it is by relaxing some of its constraints, as suggested, so be it. There is more out there than just a contest, you know.

I would ask: What do you think you have in hand, and how far do you think you can push it?

Once you know your strategy's limits, you can scale back down a bit, knowing where they are, giving you some tactical leeway.

There’s been a lot of great points raised in this thread. Let me take a step back from the transaction costs discussion and parameter optimization and talk about strategy construction as a whole.

First and foremost, it's important to understand whether you have found positive alpha, and the time horizon of your alpha signal. What is driving your return stream?

Typically, the process begins in the research environment, analyzing signals and exploring datasets. Once a custom model is constructed, you can run an Alphalens tear sheet analyzing your factor exposure and time sensitivities.

Once you have an estimated positive alpha signal, migrate the strategy to the IDE and begin developing the algorithm, selecting the parameters (using optimization in the research environment if needed), and tuning the details.

During the coding phase, run a pyfolio tear sheet to understand your risk exposures. How does your beta fluctuate over time? How steep and how long are your drawdowns? Are there specific months the algorithm did well or poorly, and why? Is your leverage managed? Is the book dollar neutral throughout time? Are the position concentrations properly managed, or were there any runaway positions? What is the average strategy turnover, and is it per expectations?

As a rule of thumb, aim for: beta neutral (zero rolling and average exposure over time), dollar neutral (zero rolling and average exposure over time), max 1x leverage (to view the raw unleveraged returns), sector neutral (to reduce the common factor risk), and low position concentration (max ~5%) achieved via a large, diversified portfolio of names. A sketch of these constraints follows. We shared the common pitfalls in this forum post.
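
Those rules of thumb map fairly directly onto the optimize API; a rough sketch (the alpha series and sector labels are assumed to exist elsewhere in the algorithm):

import quantopian.optimize as opt

def rebalance(context, data):
    objective = opt.MaximizeAlpha(context.alpha)  # your combined alpha signal
    constraints = [
        opt.MaxGrossExposure(1.0),            # max 1x leverage
        opt.DollarNeutral(),                  # zero net dollar exposure
        opt.NetGroupExposure.with_equal_bounds(   # approximately sector neutral
            labels=context.sector_labels, min=-0.05, max=0.05),
        opt.PositionConcentration.with_equal_bounds(  # max ~5% per name
            min=-0.05, max=0.05),
    ]
    order_optimal_portfolio(objective, constraints)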

The discussion in this thread has focused on the contest and the contest submission rules, but I'd suggest the greater prize is an allocation from the portfolio. A winning contest entry has a $5,000 prize, whereas a potential multimillion-dollar allocation is a greater prize and runway. Dan dives into the details in this thread.

With this in mind, I'd restructure the conversation to think, "How can I get an allocation from Quantopian?" Instead of aiming for greater beta exposure or sector exposure to win a contest prize, aim to reduce these common factors. This will increase your chances of getting noticed by Quantopian's investment team. We are continuously working on improving the contest rules to give guidance to the community. And in this spirit, aim for minimal exposure to the common factors: beta, dollar, Fama-French, all while trading a liquid universe (the Q1500).

Hope that helps,
Alisa

This is a great thread. I believe Interactive Brokers provides a report that analyzes the arrival price, i.e., the price at which it receives the order versus the price at which it filled, thus also outlining the impact cost. I don't recall the name of the report, but I remember having seen it. Maybe that report would be of more help.

And Pravin, don't give up on the idea yet. Aiming for an allocation is the correct way to think, just as Guy and Alisa have suggested.

Alisa -

Regarding better alignment of the contest and the fund, I'd suggest a separate community discussion on the topic. If the idea is to have contest rank represent the probability of getting an allocation, then reviewing how you expect to make allocation decisions in 6 months would be a starting point. Then adjustments can be made to the contest accordingly.

But maybe the contest should be more flexible? It could cover the risk that your model for fund-able algos is not ideal (e.g. maybe you are wrong in some assumptions about the fund construction, presumably based on what has worked in the past or what you perceive the market will be in the future).

3.92 Sharpe with commissions and volume slippage.

@Pravin, the image isn't loading for me...

I'm just saying that, from a software development standpoint, the Sharpe ratio is not gospel so long as returns and metrics are calculated this way. Notice the Sharpe below is also above 3, at 3.08, and the alpha of 43 beats yours. Does that mean this strategy is nearly as good as yours? The backtest below doesn't even use margin; the risk is from shorting.

The only way to accurately gauge strategy merit or compare versions and iterations is profit relative to the amount invested, not initial capital. If you change your backtest to start with $1,000 or other values, you can wind up with any Sharpe you want in this environment. However, profit per dollar actually utilized will always be exactly the same.

[Backtest attached (ID: 59907598d2551b50de3be344); interactive clone widget and metrics table unavailable]

Notice also, in the backtest above, that the leverage charted once per day (which is what your screenshot indicates) is zero. That's because charted leverage, even if you do record(leverage = context.account.leverage) every minute of every day, is still just a once-per-day snapshot (the last one it sees); that's how the chart works. So it doesn't tell others what the code is doing. For that, you need https://www.quantopian.com/posts/max-intraday-leverage.

And to share what an algorithm is doing, others have to know how much was invested. Initial capital in the chart is too often like telling mom and dad you're going to clean your room: it is only an ideal, and what actually gets done is what matters. The amount actually invested can be way over or way under initial capital for any returns one might like to see. In the chart above I could change the dates to make it show good returns and say, "hey everybody, look, no leverage, just hover over the chart and you'll see there's never any leverage, so you should all send me riches!" No. Maximum return per dollar (ever) is the only way an investor can know what matters. A sketch of tracking that intraday maximum follows.
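
A minimal sketch of tracking that maximum yourself:

def initialize(context):
    context.max_leverage = 0.0

def handle_data(context, data):
    # handle_data runs every minute; keeping a running maximum means the
    # once-per-day chart snapshot still reflects the true intraday peak.
    context.max_leverage = max(context.max_leverage,
                               context.account.leverage)
    record(max_leverage=context.max_leverage)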

@Blue, my mind is blown. I don't understand why the alpha and Sharpe are positive. But I do agree Sharpe is a weird metric.

@Pravin, I would second the concern others have about how your returns graph starts flattening out in recent years. As returns compound, you'd expect it to take off, not taper off. So either you're hitting a capacity barrier or the alpha you've discovered has already been arbitraged out of the marketplace. Hopefully it's the former, as that'll be easier to work around. It's great, though, that you have it working with the default slippage enabled.

The returns curve and the alpha of 43 with a Sharpe of 3.08 were based on the assumption that all of the initial capital was utilized, rather than the amount actually invested or risked; that could be part of it. They will be accurate to the degree we risk 100% of initial capital without going over. When it comes to shorting as risk, that's a bit of a grey area for me, although the PvR metric in the code above does count it as risk. So MxRisk shows the maximum in either shorting or long, including any margin if applicable, and is the basis for the profit-per-dollar it calculates (profit vs. risk, accounting for the acronym). It's best if most people just ignore this logic; it creates an advantage for those who don't brush it aside. Until (or if) I'm ever funded, I don't want them catching on very much; it's just so sad seeing folks led astray that I can't resist blabbing about it sometimes.

That is a bug in Quantopian. As per the formula for the Sharpe ratio, negative returns mean a negative Sharpe.

In case anyone is interested, the intraday algorithm is an adaptation of this paper. Maybe we should all collaborate and make it a community algorithm in the contest. Proceeds will go to the betterment of the community :D

Hi Karl,

Here is the algorithm. I cannot use it, because it is not easy to trade $10 million in the morning and close $10 million in the evening. Anyway, I have found a better way of doing mean reversion using other techniques.

Best regards,
Pravin

[Algorithm attached with backtest (ID: 5989bac564d2a359d3e420e6); interactive clone widget and metrics table unavailable]

I cannot use Alphalens for this algorithm. If you notice, I neutralize the betas and maximize the alphas, while with Alphalens you can only make the portfolio dollar neutral (you cannot neutralize the betas). And if you trade fewer stocks (instead of 100+ positions), it won't fill $10 million in the last 30 minutes.