long-short algo w/ CVXPY

Here's a long-short algo with CVXPY, perhaps of interest. Nothing too sexy, but it shows that CVXPY can be used.

Note that although the long-term beta is low, there is nothing forcing a market neutral portfolio on a week-by-week basis, so a contest entry might not satisfy the abs(beta) < 0.3 rule.
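If one did want to nudge the optimization toward market neutrality on every rebalance, a constraint of roughly this shape could be added to the CVXPY problem. This is only a sketch written against the pre-1.0 CVXPY API (cvx.sum_entries) that the algo uses; the variable names are illustrative, not taken from the attached code, and dollar neutrality is only a rough proxy for the abs(beta) < 0.3 rule, not the same thing.

    import numpy as np
    import cvxpy as cvx  # pre-1.0 API (cvx.sum_entries); newer CVXPY uses cvx.sum

    m = 50                     # illustrative universe size
    x_prev = np.ones(m) / m    # placeholder for the current portfolio weights

    w = cvx.Variable(m)        # new weights; negative entries are short positions

    objective = cvx.Minimize(cvx.norm(w - x_prev, 2))   # smallest move away from the current book
    constraints = [
        cvx.sum_entries(w) == 0,   # dollar neutral on every rebalance: longs offset shorts
        cvx.norm(w, 1) <= 1.0,     # gross exposure (|longs| + |shorts|) capped at 1x
    ]

    cvx.Problem(objective, constraints).solve()
    weights = np.asarray(w.value).flatten()   # feed these to order_target_percent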

[Backtest attached. Backtest ID: 585a92a8445c9b6222b5401e]

Grant, I changed a single number in your trading strategy and made three successive iterations, with a slight increase each time. Here are the results:

http://alphapowertrading.com/images/divers/CVXPY.png

The question is: would you take that slight increase? Would you accept the higher return with its higher risk, given that the added value originates from changing a single number in your program?

Grant, this is very interesting! Two areas near to my heart - optimization and mean reversion.

Question, what are these two constraints trying to accomplish?

cvx.sum_entries(x_tilde*x) >= 1  
and  
 x >= 0.2/m

Grant, in case the previous chart was not enough, and to make the point again: by accepting higher drawdowns and higher volatility, one can push a trading strategy's performance higher.

You might not like the trading strategy's behavior, I don't know. But, nonetheless, higher returns are associated with higher risks.

I added two more incremental changes with the following results:

http://alphapowertrading.com/images/divers/CVXPY_plus2.png

From the above chart we can see the progression: higher volatility, higher beta, higher drawdown, but also higher CAGR. Based on the last column, one could say the opportunity cost of that single-number change might amount to $117 million. Not bad for a digit.

Note that I did not change any of the code logic, only that one number, which was incremented step by step since it has a direct impact on n*u*PT, as in: (1+z)*n*u*PT.

But, as said, some might not like to trade that way. And, it might not be enough for others.

@ Guy - I can't really comment intelligently, since I don't know which "single number" you changed.

@ Dan - I'll post the code with a bunch of comments (probably tomorrow), and explain what I think the principle of the thing is, along with what the optimization constraints are intended to accomplish.

Grant, the point I wanted to make is easy. Once you have a workable trading strategy as the one you provided, you can change a single number and change its mission considerably, even triple its CAGR.

Such a wide range of outcomes, without even changing a single line of code, does say that maybe the structure of the strategy itself might provide added benefits to a trading script.

Just wanted to let you know that it can be done, and with ease.

Here's an update, with comments. Note that with context.use_hedge = False there is no SPY hedging instrument applied. I'll post a version with context.use_hedge = True for comparison.

Hopefully my comments are helpful; if not, just let me know.

[Backtest attached. Backtest ID: 585bde07d65444621d1bf0c9]

Here's the version with context.use_hedge = True.

[Backtest attached. Backtest ID: 585bdf8d946ec5630e5a3a76]

Another run. Take note:

No hedging instrument.
Commissions & slippage disabled.
Daily trading, instead of weekly.

    # control to determine use of hedging instrument
    context.use_hedge = False
    # disable slippage & commissions
    set_slippage(slippage.FixedSlippage(spread=0.00))
    set_commission(commission.PerShare(cost=0, min_trade_cost=0))

    # get the portfolio weights
    schedule_function(get_weights, date_rules.every_day(), time_rules.market_open(hours=1))
    # place orders to adjust positions
    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open(hours=1))

100 stocks instead of 50

top_market_cap = market_cap.top(100, mask = (Q1500US() & profitable))  

390 minute COM smoothing, instead of 30 minutes

    # get minutely prices & smooth
    prices = data.history(context.stocks, 'price', 5*390, '1m').dropna(axis=1)
    context.stocks = list(prices.columns.values)
    prices = prices.ewm(com=390).mean().as_matrix(context.stocks)
[Backtest attached. Backtest ID: 585bf006d65444621d1bf2b8]

Grant, I'll confess that I didn't completely read (and comprehend) the article you referred to "On-Line Portfolio Selection with Moving Average Reversion", so this may be a naive question...

Why are you optimizing for the minimum distance from the current portfolio? At first glance, wouldn't one want to optimize for maximum profit, like below:

    objective = cvx.Maximize(cvx.sum_entries(b_t*x))


@ Dan W.

It seems counter-intuitive. The basic idea is to minimize the "distance" between the old portfolio vector and the new one, subject to constraints. The distance metric is not where the magic happens; it is used, since one wants to make the smallest change possible, while still satisfying the constraints. Loosely, it's sorta like: remodel my house such that it is nearly exactly as before, except that I want it to be worth as much as possible on the market, given that I'm unwilling to borrow money, and we only have so much to allocate to the various improvements. Oh, and by the way, I'd like to try to do something to every room in the house.

Presumably the optimization is pushing to meet the inequality constraint in the maximal way, adjusting x so that cvx.sum_entries(x_tilde*x) is as large as possible. You could try cvx.sum_entries(x_tilde*x) == 1 for comparison.

Rather than cvx.sum_entries(x_tilde*x) == 1 you could try something like:

constraints = [cvx.sum_entries(x) == 1, cvx.sum_entries(x_tilde*x) >= 1, cvx.sum_entries(x_tilde*x) <= 1.005, x >= 0.5/m]  

The equality constraint may be too restrictive. If cvx.sum_entries(x_tilde*x) >= 1 is better, then it would suggest that the optimizer is not stopping at cvx.sum_entries(x_tilde*x) == 1.
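For anyone who wants to experiment with these constraint variants outside the backtester, here is a rough standalone sketch of the kind of problem being solved, with made-up inputs and the pre-1.0 cvx.sum_entries API quoted above (newer CVXPY spells these cvx.sum and cvx.multiply):

    import numpy as np
    import cvxpy as cvx  # pre-1.0 API, matching the algo

    m = 50                                       # illustrative universe size
    x_prev = np.ones(m) / m                      # current portfolio weights
    x_tilde = 1.0 + 0.05 * np.random.randn(m)    # made-up predicted price relatives

    x = cvx.Variable(m)

    # Stay as close as possible to the current portfolio...
    objective = cvx.Minimize(cvx.norm(x - x_prev, 2))

    # ...while remaining fully invested, demanding a predicted portfolio
    # price relative of at least 1, and keeping a floor weight in every name.
    constraints_ge = [cvx.sum_entries(x) == 1,
                      cvx.sum_entries(x_tilde * x) >= 1,
                      x >= 0.2 / m]

    # The stricter equality variant, for comparison.
    constraints_eq = [cvx.sum_entries(x) == 1,
                      cvx.sum_entries(x_tilde * x) == 1,
                      x >= 0.2 / m]

    for constraints in (constraints_ge, constraints_eq):
        prob = cvx.Problem(objective, constraints)
        prob.solve()
        print(prob.status)   # compare x.value between runs; if they differ, the optimizer is not stopping at exactly 1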

Good analogy to the house remodeling. I'll need to think about this. Thanks!

Grant, I continued to make some tests using your trading strategy. This time, I used your other program: the Long Only Mean Reversion with CVXPY. You will find those results in the second panel in the chart below.

The test conditions were the same. Each column used the same numbers as in the first program, and that was the only thing changed for that series of tests. This way, we can compare the two strategies on an equal footing. Not a single line of code was changed except for those numbers, which enabled comparing strategies as in: is strategy A better than strategy B?

Just by the numbers, I would prefer strategy A. Less volatility, smoother ride, better profits.

However, you made modifications to strategy A which, at first glance, appeared not to have improved the output, even though you had set aside commissions and slippage.

I had to test that too, under the same conditions, meaning using the same numbers as in the previous tests. This way, each column would have that one number in common across scenarios.

CVXPY Tests:

http://alphapowertrading.com/images/divers/CVXPY_3tests_w_comms.png

The changes you made to strategy A really improved the overall performance as shown in the third panel. I reinstated the default frictional costs settings in that test in order to be on the same basis as in the first and second panel tests.

The main reason panel 3 generates more profits is your modifications: they increased the number of trade candidates and trade opportunities, and a lot more trades were taken. And since there is a positive edge in the methodology, it generated a lot more profits, as in: (1+z)*n*u*PT. All three sets of tests had z increasing, starting from 0.00.

If I had to choose, I would pick your last modifications (frictional costs included) even if the risks seem higher. I think I will be able to reduce those drawdowns when I look at the code itself. I am not ready for that yet. One thing is sure, however: with your modifications the strategy trades a lot more. Maybe some of it is not necessary or productive. But, nonetheless, it makes its money and that is what counts.

Thanks, it was interesting to see the tests pan out. Still being in my learning phase, your programs are giving me a good example of things that can be done with Quantopian.

In my opinion, the better strategy is in panel 3. Even if it did not look like it to start with.

Hello Guy -

Glad that you find the algos I posted useful. I would definitely take the next step and formulate your own understanding and version. I am a total amateur hack, in more domains than I know, including programming and finance. In the grand scheme of things, I doubt that I've done anything novel here.

Grant, I continued to investigate your two CVXPY strategies. Here is some added analysis.

Refer to the chart in the previous post. Column 2 shows some portfolio metrics for the original versions of your programs. The strategy of interest is the Long-Short algo (panels 1 and 3). The Long Only Mean Reversion strategy (panel 2) can be viewed as an also-ran, not warranting the added risk for a lesser return than panel 1.

In the third-panel version, you doubled the number of selectable trade candidates, and thereby increased the number of trade opportunities. However, these changes did not improve the overall picture; on the contrary, they slightly reduced performance. The move also increased frictional costs. We can't say that the metrics displayed for the original versions in panels 1 and 3 are that different. In panel 3, I used a different random seed number than pi (nice touch, btw), but that could not have had much of an impact. I also reinstated frictional costs.

Based on performance results alone, I would say that the CVXPY method used is not significantly different from having bought an index equivalent (SPY). Technically, all that work and nothing to show for it, all the generated alpha being eaten up by the frictional costs on thousands and thousands of trades.

I had to find out why the modification I made to your programs increased performance. In my last post I proposed a tentative explanation with the equation A(t) = A(0) + (1+z)∙n∙u∙PT: as I increased z, I would be improving performance. There seemed to be some kind of linear relationship in there. After further analysis, it is more like: A(t) = A(0) + (1+z)^t ∙n∙u∙PT.

And since z was the only number changed from test to test, I had to conclude that (1+z)^t was the only reason for the performance improvement. The boost in panel 3 compared to panel 1 came from your increasing the number of trade opportunities that could be taken advantage of, resulting in an increased number of trades.

I made other tests. I wanted to determine whether it was indeed (1+z)^t or simply (1+z). In the beginning the difference is not that large; it is only with time that the difference shows. Therefore, I made successive tests going back 5, 6, 7, 8, 9 and 10 years. What I wanted to see was whether I would get an expanding spread above the benchmark, and that is what I got, confirming that (1+z)^t was at play.

I was left with the WHY. One explanation was that if A(t) = A(0)∙(1+r)^t was a CAGR representation of the portfolio, where r was the benchmark return, then z, to be accounted for, must have had for impact: A(t) = A(0)∙(1+r+z)^t. That is what made the difference. As I increased z, it increased general performance over time, imperceptibly at first, but still at a compounding rate. The more I increased z, as shown by the other columns in the presented chart, the more the performance increased. It did this for all the tests. I did not test for upper limits, but a napkin calculation showed they were in sight.

The only reason for the improved performance was: (1+z)^t. Evidently, with z=0, you are back to square one.
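As a quick back-of-the-envelope check of the compounding argument, the spread between (1+r)^t and (1+r+z)^t does widen with the horizon even for a small z. With purely illustrative numbers (r = 8%, z = 2%, $1M start):

    # Illustrative numbers only: r plays the role of the benchmark CAGR, z the small boost.
    A0, r, z = 1e6, 0.08, 0.02

    for t in (5, 10, 15, 20):
        base = A0 * (1 + r) ** t
        boosted = A0 * (1 + r + z) ** t
        print(t, round(base), round(boosted), round(boosted - base))   # the spread widens with t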

After having done all that, I reached the conclusion that there was nothing in the original CVXPY methodology as presented, in the sense that it does not perform better than the benchmark; its alpha tends to zero (α → 0).

With such a conclusion, I might as well find another strategy candidate with better alpha potential. A strategy with some built-in alpha will have for equation: A(t) = A(0)∙(1+r+α)^t, which I can then transform into: A(t) = A(0)∙(1+r+α+z)^t, leading to even higher performance levels since α and z can be expressed in CAGR terms.

So, sorry on that one. I find it is simply not enough to build on.

Thanks Guy -

My main objective was to "kick the tires" on the CVXPY implementation (recently released to Quantopian). Perhaps it'll find some profitable use down the road.

Cheers,

Grant

Here's another version to consider. --Grant

[Backtest attached. Backtest ID: 5884a39bb07bf961362be5f6]

Here's the tear sheet for the backtest immediately above. --Grant


Grant,

I was curious to fiddle around with CVXPY for optimization. There is a backtested strategy on the forum that I really like (https://www.quantopian.com/posts/trading-the-high-yield-low-volatility-stocks-of-the-s-and-p500) that uses pipeline to pull in the lowest-volatility, highest-yielding stocks, so I grabbed the pipeline logic from it and used your CVXPY code to try to optimize the portfolio from just that list of highest-yielding, lowest-volatility stocks. I also made it long only.

Do you understand enough about the guts of CVXPY to explain why this backtest shows the number of securities held consistently shrinking over time? Performance is good, but the number of stocks held gets so low that risk becomes more concentrated. I can't explain why the number of stocks held would shrink so much. If you have any ideas, can you clue me in?
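One way to narrow this down (a diagnostic sketch, not a fix) is to record how many of the optimized weights are economically meaningful at each rebalance; if the solver is concentrating the book, the count will drift down even while leverage stays near 1. The names context.weights and context.stocks are assumptions about how the cloned algo stores the CVXPY output; adjust to match:

    import numpy as np

    def rebalance(context, data):
        # context.weights is assumed to hold the CVXPY solution as an array
        # aligned with context.stocks; adjust the names to match the cloned algo.
        weights = np.asarray(context.weights).flatten()

        # Count names carrying more than ~0.5% of the book and plot the count.
        record(num_held=int(np.sum(np.abs(weights) > 0.005)))

        for stock, weight in zip(context.stocks, weights):
            if data.can_trade(stock):
                order_target_percent(stock, weight)

The recorded series then shows up in the backtest's custom chart alongside leverage.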

Thanks for your time!

[Backtest attached. Backtest ID: 58b5ae1456dfcc5e2e126491]

Joseph,

I made a few changes, and perhaps got your algo to work. Note that it now trades daily, and for testing, I set:

    set_slippage(slippage.FixedSlippage(spread=0.00))  
    set_commission(commission.PerShare(cost=0, min_trade_cost=0))  
[Backtest attached. Backtest ID: 58b695e4f568894791d6014a]

Awesome Grant! Thank you so much! I will study it and figure out the changes you made so I have a better understanding of it. I plan to play around with the number of stocks - I believe that holding a smaller number of positions (~20 or so) should yield better results.

I appreciate your time,

Joseph

Grant,

Here is a somewhat optimized version (hopefully not too overfit) of the algorithm you helped me with. I put commissions/slippage back in, because my hope is to eventually live trade something similar. I also changed the lookback period (N) to 50 days, added some home-brewed stop-loss code, and reduced the universe of stocks down to the 10 lowest volatility, highest yielding stocks in the Q500 over the past 3 years. I know this increases the risk, but I am personally not a fan of portfolios with too large a basket of stocks, partly because I don't have unlimited capital and want to limit my transaction costs. Curious to get other people's opinions on the drawbacks of using something like this.
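For readers curious what a home-brewed stop-loss can look like on Quantopian, here is a generic sketch (not Joseph's actual code; the 10% threshold is arbitrary and it only handles long positions):

    def check_stops(context, data):
        # Liquidate any long position that has fallen more than 10% below its cost basis.
        for stock, position in context.portfolio.positions.items():
            if position.amount <= 0 or not data.can_trade(stock):
                continue
            if data.current(stock, 'price') < 0.90 * position.cost_basis:
                order_target_percent(stock, 0)

It would be scheduled daily, e.g. schedule_function(check_stops, date_rules.every_day(), time_rules.market_open(minutes=30)).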

DISCLAIMER: I know it isn't what Q is looking for in terms of an allocation, but I feel like something like this would work well for a retirement account.

[Backtest attached. Backtest ID: 58b725474c07925e3c3a103a]

Intraday leverage hits 1.4: margin. It made 218k, however it required 196k to do so. By that measure, 111%. Benchmark: 101%.

Blue - are you assuming that it used 1.4 leverage the entire time? I know leverage isn't allowed in a retirement account, so the timing would have to be sorted out before you could use this to trade an IRA. The target leverage is 1.0, and it appears to stay at 1.0 most of the time, but there may be times that a buy executes before a sell, which I would think could be fixed pretty easily. I'm not certain I fully understand where you got the 196k number - the backtest starts with 100k, and it certainly doesn't go to 1.96 leverage at the outset, so I assume it uses 196k at some point during the backtest (probably at the point it hits 1.4 leverage). Can you help me understand your math?

I ran a larger backtest, and it looks like this strategy didn't outperform until after 2008, so the "hunt for yield" may have something to do with it, and it may not work so well if interest rates rise.

[Backtest attached. Backtest ID: 58b739dcefce985e4a7c8004]

Hi Blue -

I'd be interested in understanding better, too. Does leverage > 1 while orders are being filled matter? It should be the average gross leverage that is key, right? You seem to be suggesting that the results are invalid by a lot. Really?

I am wondering, is there a way to add an optimization constraint to pick the stocks with the lowest correlations within the group?

500% returns are possible, just not as currently written. I gave you the more real-world default slippage/commission figures by mistake, sorry about that. So please indulge me as I address that one.

In the past I have sounded an alarm along the lines of "results with huge margin are invalid by a lot"; since then I think I've tempered it to: we're going to get bitten if we think margin comes for free. More importantly, we can write much higher-quality algorithms if we open our eyes to any negative cash and quit turning a blind eye to it. It is a big positive for us to be on that firm footing, knowing. Otherwise the backtest is going to be way off compared to real money in the real world, and that's unnecessary.

When I have brought this up, as I often do, over the last two years, I've been mostly ignored, so I'm glad to see you guys at least not casually brushing it aside.

In that algo with defaults, 96k was borrowed, perhaps just for part of one day, or maybe leverage does not always return to 1 by the end of the day and it just isn't visible due to chart smoothing. The leverage high happened around Apr 2012, while the largest dip into negative cash happened Aug 2016. Both are probably due to sells not going through, per the default slippage model's 2.5% volume limit. In the real market those might be executed. If not, then if one does not have a margin account, the buys will be rejected. With a margin account there will be costs not modeled. Can someone model margin costs?
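On that last question, a crude approximation is to accrue daily interest on any negative cash balance and track it as a custom variable. A sketch, assuming an 8% annual rate and a 252-day count (not IB's actual schedule); note the backtester will not deduct this from returns, it only makes the drag visible:

    def accrue_margin_interest(context, data):
        # Schedule once per day near the close. The 8% annual rate and 252-day
        # count are assumptions, not IB's schedule, and the backtester will not
        # deduct this from returns; it only makes the drag visible on the chart.
        rate = 0.08
        cash = context.portfolio.cash
        if cash < 0:
            context.margin_paid = getattr(context, 'margin_paid', 0.0) - cash * rate / 252.0
        record(margin_paid=getattr(context, 'margin_paid', 0.0))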

Either way, the results would be further from what the chart shows than is desirable, in my opinion, to be at all optimistic about the strategy as is; and some algos out there that do that sort of thing (big borrowing) could even lead folks down a primrose path to painful losses. Not recommendable. However, if you rein in negative cash, the result can be trusted a lot more. That encapsulates my point of view.

Here's the code with some instrumentation added to the zero slippage/commission version. If you really want to turn the lights on, add track_orders and set 'start' to a date just before any area of interest, otherwise it might overwhelm the logging window or take a long time scrolling down to it.

There are various ways to delay buys until selling is done. I'm confident you'll be happier with the results that way.
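One simple pattern along those lines: schedule the sells a few minutes before the buys, and skip the buy pass while any sell orders are still open. A sketch, assuming the target weights live in a context.weights dict built elsewhere in the algo:

    def initialize(context):
        # Sells run first; buys run a few minutes later, and only once sells have filled.
        schedule_function(do_sells, date_rules.every_day(), time_rules.market_open(minutes=60))
        schedule_function(do_buys, date_rules.every_day(), time_rules.market_open(minutes=65))

    def do_sells(context, data):
        # Exit anything no longer in the target list (context.weights is assumed
        # to be a dict of {security: target weight} built by the optimizer step).
        for stock in context.portfolio.positions:
            if stock not in context.weights and data.can_trade(stock):
                order_target_percent(stock, 0)

    def do_buys(context, data):
        # Skip the buy pass entirely while any sell orders are still working.
        if get_open_orders():
            return
        for stock, weight in context.weights.items():
            if data.can_trade(stock):
                order_target_percent(stock, weight)

Names that stay in the book but at a smaller weight would need the same sell-first treatment; this sketch only handles outright exits.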

[Backtest attached. Backtest ID: 58b789585ea96d5e3cd09d33]

@ Maxim -

Good question regarding minimizing correlation. I wonder if clustering would be better suited (e.g. http://scikit-learn.org/stable/auto_examples/applications/plot_stock_market.html ). The idea is to slice and dice the universe into groups of similar stocks, and then pick one (or more) from each group.
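A rough sketch of that clustering idea for the research environment, using affinity propagation on a return-correlation matrix (using correlation as the similarity and keeping the least-volatile name per cluster are arbitrary choices, not taken from the scikit-learn example):

    import numpy as np
    from sklearn.cluster import AffinityPropagation

    def pick_one_per_cluster(prices):
        # prices: DataFrame of daily prices, one column per stock.
        returns = prices.pct_change().dropna()
        corr = returns.corr().values

        # Treat the correlation matrix as a precomputed similarity and let
        # affinity propagation decide how many groups there are.
        labels = AffinityPropagation(affinity='precomputed').fit(corr).labels_

        picks = []
        for label in np.unique(labels):
            members = returns.columns[labels == label]
            picks.append(returns[members].std().idxmin())   # keep the calmest name in each group
        return picks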

@ Blue -

Thanks. I suppose that the most conservative approach would be to run the backtest, capturing the max leverage ever hit (minute-by-minute). Then the gross leverage could be adjusted downward from 1.0 until the max minute-by-minute leverage is 1.0. This puts a pot of cash on the side that earns no return, and so the overall return is dampened. This way, with the standard Quantopian slippage and commission models, perhaps one would have a baseline of the worst-case; assuming the strategy is sound, it could only get better.

Grant, it's not easy, but it's a lot better/more accurate to write code that very strictly controls buying and selling to prevent the leverage spikes than to reduce leverage until they don't go above 1.0. It's been on my mind a lot lately, and I've written/copied so much code attempting to deal with it. You first have to accurately model your cash, since you can't rely on Quantopian's cash measurements; then you have to write your own order() function that basically does what order_target_percent() does, because you can't rely on Quantopian's target functions. It's a real pain. I've been meaning to compile and write a post that encompasses a "perfect" ordering scheme, but I haven't gotten around to it, and I'm not exactly a pro.
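A stripped-down version of that idea is to keep a running cash estimate and size each buy against it, rather than trusting that target-percent orders will net out. A sketch; it ignores fill prices, slippage, and commissions, which is exactly the detail a truly "perfect" scheme would have to model:

    def cash_aware_buys(context, data, targets):
        # targets: {security: dollars to add}. Sizes each buy against a running
        # cash estimate instead of trusting that target-percent orders net out.
        cash_left = context.portfolio.cash
        for stock, dollars in targets.items():
            if not data.can_trade(stock) or cash_left <= 0:
                continue
            spend = min(dollars, cash_left)
            order_value(stock, spend)      # buy roughly `spend` dollars' worth
            cash_left -= spend             # assumes full spend; ignores slippage and fees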

Out of curiosity, I took the same algorithm and separated the buy and sell orders by 5 minutes (to hopefully make sure that stocks were sold before others were bought). I also recorded the max leverage that Q observed each minute. It did get up to 1.05 at one point, which I can certainly live with. Results aren't materially different, but drawdown does increase quite a bit.

I have another question for those with experience in the matter: I did some searching, and I wasn't able to confirm whether or not IB charges any interest for intra-day margin. The things I read indicated that they calculate it on what you are holding at the close. This link indicates that most brokerages don't/shouldn't charge anything intra-day: https://www.elitetrader.com/et/threads/quick-question-on-margin-interest.87811/. Of course risk is increased, but only for the 1-2 minute period while transactions are executing, which I think is something I would be able to live with.

[Backtest attached. Backtest ID: 58b83c2eb630995e464f74ff]

I just posted:

https://www.quantopian.com/posts/quantopian-and-robinhood-lessons-learned-and-best-practices

This might be a better place to discuss this business of "non-margin" trading.