How much money is an algorithm worth?

I have created an algorithm that generates a 150,000% return in three years. How much money can this algorithm be sold for?

50 responses

Well, let's see ..... 150,000% = a multiple of roughly 1,500 times over 3 years, you say. So now (just doing a little math) that implies you are multiplying your account capital by about 11.45 times each year. OK, now let's say that instead of doing this for 3 years you are patient enough to do it for 10 years. That will give you a return multiple of 11.45^10, or about 3.9E+10 times your starting capital. If you started out with $10k (a reasonable amount for a modestly small trading account, isn't it?), then you will end up after 10 years with about $3.9E+14, or roughly $390 trillion. That compares with 2018 figures of US GDP = $20.7 trillion, and whole-world GDP = $87.5 trillion. In other words, using your trading system, your trading account value will have grown to more than four times the entire world's annual output. Well, I'm sorry, but it looks like the inhabitants of Planet Earth simply do not have enough to be able to pay you what your system is worth. So, here's an idea. Instead of SELLING it, why not just trade it yourself? Then, in a decade or thereabouts, you will own everything there is on this humble planet anyway.
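
If you want to check that compounding yourself, a few lines of plain Python reproduce the arithmetic (the $10k starting capital and 10-year horizon are just the hypothetical numbers used above):

total_multiple  = 1 + 150000 / 100.0           # a 150,000% gain over 3 years, i.e. about 1,501x
annual_multiple = total_multiple ** (1 / 3.0)  # about 11.45x per year
ten_year_value  = 10000.0 * annual_multiple ** 10
print(annual_multiple)  # ~11.45
print(ten_year_value)   # ~3.9e14, i.e. roughly $390 trillion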

Please note: the opinions expressed above are solely those of the writer and are not related to those of anyone else that I know at Quantopian. Perhaps you might like to elicit some alternative opinions ....?

How many times over the years have people asked the same sort of question? Quite a lot, I fear.

Why can't people realise how they could start with peanuts and still end up owning the universe if they were that good?

It's always the same story on trading forums.

Would that such algos had any basis in reality. If so, I would ask you for a test copy.

@James,

Both Tony and Zeno make good points. I’ll add a few:

Basically these returns are not real. Most likely you are using limit orders to achieve some profit target, and stop loss orders to limit your downside. Possibly only trading low volume stocks. Congratulations, you’ve hacked the backtester and default slippage model. Good luck getting that in the real market.

Even if your returns are real, I’d check that your leverage is not out of control. 10x returns are not very impressive if you’re also using 10x leverage.

Lastly, absolute returns don't mean much if there's a good chance that you get wiped out getting there (e.g. max drawdown > 100%). It's all about risk-adjusted returns.
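
To make "risk-adjusted" a bit more concrete, here is a tiny sketch of annualized return, volatility and Sharpe. The daily return series below is randomly generated purely for illustration; in practice it would come from your backtest:

import numpy as np
import pandas as pd

daily_returns = pd.Series(np.random.normal(0.0005, 0.02, 252))  # hypothetical daily strategy returns

annual_return = (1 + daily_returns).prod() - 1              # compounded return over ~1 trading year
annual_vol    = daily_returns.std() * np.sqrt(252)          # annualized volatility
sharpe        = daily_returns.mean() / daily_returns.std() * np.sqrt(252)

print(annual_return, annual_vol, sharpe)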

Thank you Joakim, Tony, and Zeno for your honesty. You all make good points about how my algorithm needs improvement.

@James, I didn't really say anything about HOW your algo might be improved and, although @Joakim's suggestions are certainly very plausible ones, we would need more info to be able to comment with certainty on what exactly might help. My point, if you'll excuse my joke, is that the result you mention is just so wildly "too good to be true" that very clearly something is definitely wrong. Best wishes.

@Tony, I recently improved my algorithm to a 2,000,000% return, but there are many problems. One of them is that the algorithm reaches -500,000% in returns at one point. This happens when the Russell 2000's stock prices go down. Second, some of the stocks the algorithm trades are penny stocks. With that being said, my algorithm needs improvement and is not "too good to be true." I am open to any suggestions to help improve my algorithm.

@James Gastineau,

Before making any comments or suggestions for improvement on your super algorithm, I'd like to ask a few basic questions:
1) Was this algo done within Quantopian environment and framework?
2) At what frequency are you trading? Tick, minute, hourly, daily, etc.
3) How are you placing your orders? Market or limit orders, with or without take profit and/or stop loss?
4) What universe are you trading on?
5) Are your fantastic results inclusive of transaction costs and slippage? If so, what are they?

These are mostly questions about your trading setup, and the answers can tell us where your setup might be flawed. Thanks.

Lastly, if you can prove to me with a one-month brokerage statement of real trading that your algo does what it says, give or take an allowance of half your claim, I will volunteer to be your slave!

@James, are you being serious or not? That's definitely NOT an "improvement" at all! If your account equity goes negative, then your account is bankrupt, and everything after that is just bullshit. You need to get rid of the negative account equity problem first; only AFTER you can do that will you start to have anything that begins to be meaningful.

So far, I have only backtested the algorithm on Quantopian. I started using Quantopian about three weeks ago and I currently do not have a brokerage account.

@ James Villa,
1) My entire algorithm has been done through Quantopian and Pipeline.
2) My algorithm trades every day, starting 1 hour after market open.
3) The algorithm's buying and selling signals are based on moving-average crossovers for each stock. There is also a form of "stop loss": when the Russell 2000 loses 2%, the algorithm short sells all of the stocks.
4) I am trading on USEquityPricing.
5) I have not put transaction costs into the algorithm.

@ Tony, I am being serious. I am currently having issues with selling all my stocks when the market is going into a correction. I have used the Russell 2000 to stop the stocks from trading, but the algorithm still goes negative.

Hmm, any chance to see a backtest tearsheet?
Just curious.

@James Gastineau,

I started using Quantopian about three weeks ago and I currently do not have a brokerage account.

Welcome to Quantopian! There is much to learn, especially the Quantopian environment and framework. So fasten your seat belt and be prepared for a bumpy ride!

5) I have not put transaction costs into the algorithm.

Here's my first suggestion. If you're trading on a daily timeframe, add this code in your initialize function:

set_slippage(slippage.FixedBasisPointsSlippage(basis_points=5, volume_limit=0.1))  
set_commission(commission.PerShare(cost=0.001, min_trade_cost=0))  

If you are trading intraday, then add this code instead:

set_slippage(slippage.VolumeShareSlippage(volume_limit=0.025, price_impact=0.1))  
set_commission(commission.PerShare(cost=0.001, min_trade_cost=0))

Also of note: if you are trading intraday, the IDE backtester has a known bug with limit orders, which is what @Joakim alluded to when he mentioned the "hack" you may have discovered!

First try these and see if your algo passes this initial stress test. Good luck!

@ James Villa, thank you so much for helping me out!

It sounds like he's using massive leverage.

He does raise a fair question: how do you value a trading algorithm? It'll have something to do with Alpha and Sharpe, but I don't know the proper calculation.

Hi James G.,

I can get similar returns in a casino if you let me play long enough with your money. ;)

My point is, absolute returns are really meaningless without some more context (e.g. Risk, Leverage, Slippage, Universe, etc).

4) I am trading on USEquityPricing.

This is not a Universe; it's the OHLCV price feed for all US equity products. Have you defined a Universe? If not, I would recommend starting with QTradableStocksUS() as your base universe at least (unless you're trading other equity products like ETFs, or Futures).

If you want some more/better feedback, I'd strongly recommend attaching a Tearsheet for the community to review and provide feedback on, as others have suggested. You can do this in the Backtester screen on the Notebook tab, or just run the attached notebook, replacing my backtest ID with your own (you won't be able to run mine).

Most of us here do this all the time, and in my opinion, there's really no way someone could 'reverse engineer' your 'secret sauce' by reviewing your tearsheet. If you're really worried you could run the option (hide_positions=True) as well. I only do this hoping that it uses less memory on a longer backtest, not because I'm worried that someone could reverse engineer my strategy if they saw my top positions.

Regarding Market or Limit orders, you didn't really answer this question. If you're using Market orders then at least that's not the 'problem'. If you're using Limit orders though I'd be very suspicious if you're really getting realistic fills, especially if you're trading low volume stuff.

(Notebook attached.)
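
For reference, the attached notebook presumably boils down to something like the two lines below, run in a Research notebook; the backtest ID is a placeholder you would replace with your own, and hide_positions is the optional flag mentioned above:

bt = get_backtest('your_backtest_id_here')   # placeholder backtest ID
bt.create_full_tear_sheet(hide_positions=True)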

@ Joakim, thank you for the universe advice! The biggest problem with my algorithm is selling the stocks when their prices fall. Here is my algorithm.

(Backtest attached.)

from zipline.api import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import Returns
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.factors import SimpleMovingAverage
import pandas as pd
import numpy as np
import math 

def initialize(context):
    my_pipe = make_pipeline()
    attach_pipeline(my_pipe, 'my_pipeline')              
    schedule_function(ma_crossover_handling, 
                      date_rules.every_day(), 
                      time_rules.market_open(hours=1))
    set_benchmark(sid(21519))
    
    return my_pipe
def make_pipeline():
    pipe = Pipeline()
    
    SMA = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=200)
    
    year_return_test = Returns(
        window_length=100) *100
    
    m = year_return_test > 60
    
    year_return_test2 = Returns(
        window_length=100,
        mask = m) *100
    
    SMA_above = year_return_test2 > SMA
    
    year_return_test3 = Returns(
        window_length=100,
        mask = SMA_above) *100
    
    m2 = year_return_test3 < 115
    
    year_return = Returns(
        window_length=100,
        mask = m2) *100
    
    sma_10 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=10)
    pass 
    
    sma_7 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=7)
    pass
    
    sma_1 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=1)
    pass

    
    pe_ratio = Fundamentals.pe_ratio.latest
    
    
    peg_ratio = Fundamentals.peg_ratio.latest
    peg_ratio_1 = peg_ratio <= 1

    pct5 = Returns(inputs=[USEquityPricing.close], window_length=2)
    return Pipeline(
        columns={
            'year_return': year_return,
            'pe_ratio': pe_ratio,
            'peg_ratio':peg_ratio_1,
            'sma_1': sma_1,
            'sma_7':sma_7,
            'sma_10':sma_10,
            'pct5':pct5
            
            
        }, screen=peg_ratio_1
    )

def before_trading_start(context, data):
    # Store our pipeline output DataFrame in context
    context.output = pipeline_output('my_pipeline')
def do_daily(context, data):  
    # iterate through the currently held positions  
    for security, position in context.portfolio.positions.items():  
        stock_price = position.last_sale_price  
        stock_basis = position.cost_basis  
        stop_price = stock_basis - (stock_basis * 0.01)

        # Sell at market if the last price is <= the stop price  
        if stock_price <= stop_price:  
            order_target(security,0)  
            print "SOLD STOPLOSS"    
def ma_crossover_handling(context,data):
    context.IWM = sid(21519)
    
    hist = data.history(context.IWM, 'price', 365, '1d')
    
    sma_7_IWM = hist[-7:].mean()
    sma_1_IWM = hist[-1:].mean()
    
    pct__change = hist.pct_change()[-1]
    WEIGHT = 1.0 / len('my_pipeline')
    
    
    
    open_orders = get_open_orders() 
    
    open_rules = 'sma_1 > sma_10'
    open_these = context.output.query(open_rules).index.tolist()
   
    for stock in open_these:
        if stock not in context.portfolio.positions and data.can_trade(stock):
            order_target_percent(stock, WEIGHT, style=StopOrder(.06))

    close_rules = 'sma_1 < sma_10'
    close_these = context.output.query(open_rules).index.tolist()
    
    for stock in open_these:
        if stock not in context.portfolio.positions and data.can_trade(stock):
            order_target_percent(stock, -WEIGHT, style=StopOrder(.06))
            
    for stock in open_these:
        if pct__change < -.02:
            order_target_percent(stock, -WEIGHT)

That's confirmed my theory: he's using unbelievable leverage.

@ Quant Trader, you are correct and that is why I am asking for help.

@James G.

Good on you for sharing the strategy. In the attached, I've defined the universe = QTradableStocksUS and intersected that with your Pipeline screen.

If you want to continue improving this strategy, I would look at limiting Leverage next. Perhaps by using Optimize API? That way you can more easily limit other risks and exposures as well.

(Backtest attached.)

from zipline.api import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import Returns
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.factors import SimpleMovingAverage
import pandas as pd
import numpy as np
import math 

def initialize(context):
    my_pipe = make_pipeline()
    attach_pipeline(my_pipe, 'my_pipeline')              
    schedule_function(ma_crossover_handling, 
                      date_rules.every_day(), 
                      time_rules.market_open(hours=1))
    set_benchmark(sid(21519))
    
    return my_pipe
def make_pipeline():
    
    universe = QTradableStocksUS()
    
    pipe = Pipeline()
    
    SMA = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=200)
    
    year_return_test = Returns(
        window_length=100) *100
    
    m = year_return_test > 60
    
    year_return_test2 = Returns(
        window_length=100,
        mask = m) *100
    
    SMA_above = year_return_test2 > SMA
    
    year_return_test3 = Returns(
        window_length=100,
        mask = SMA_above) *100
    
    m2 = year_return_test3 < 115
    
    year_return = Returns(
        window_length=100,
        mask = m2) *100
    
    sma_10 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=10)
    pass 
    
    sma_7 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=7)
    pass
    
    sma_1 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=1)
    pass

    
    pe_ratio = Fundamentals.pe_ratio.latest
    
    
    peg_ratio = Fundamentals.peg_ratio.latest
    peg_ratio_1 = peg_ratio <= 1

    pct5 = Returns(inputs=[USEquityPricing.close], window_length=2)
    return Pipeline(
        columns={
            'year_return': year_return,
            'pe_ratio': pe_ratio,
            'peg_ratio':peg_ratio_1,
            'sma_1': sma_1,
            'sma_7':sma_7,
            'sma_10':sma_10,
            'pct5':pct5
            
            
        }, screen=peg_ratio_1 & universe
    )

def before_trading_start(context, data):
    # Store our pipeline output DataFrame in context
    context.output = pipeline_output('my_pipeline')
def do_daily(context, data):  
    # iterate through the currently held positions  
    for security, position in context.portfolio.positions.items():  
        stock_price = position.last_sale_price  
        stock_basis = position.cost_basis  
        stop_price = stock_basis - (stock_basis * 0.01)

        # Sell at market if the last price is <= the stop price  
        if stock_price <= stop_price:  
            order_target(security,0)  
            print "SOLD STOPLOSS"    
def ma_crossover_handling(context,data):
    context.IWM = sid(21519)
    
    hist = data.history(context.IWM, 'price', 365, '1d')
    
    sma_7_IWM = hist[-7:].mean()
    sma_1_IWM = hist[-1:].mean()
    
    pct__change = hist.pct_change()[-1]
    WEIGHT = 1.0 / len('my_pipeline')
    
    
    
    open_orders = get_open_orders() 
    
    open_rules = 'sma_1 > sma_10'
    open_these = context.output.query(open_rules).index.tolist()
   
    for stock in open_these:
        if stock not in context.portfolio.positions and data.can_trade(stock):
            order_target_percent(stock, WEIGHT, style=StopOrder(.06))

    close_rules = 'sma_1 < sma_10'
    close_these = context.output.query(open_rules).index.tolist()
    
    for stock in open_these:
        if stock not in context.portfolio.positions and data.can_trade(stock):
            order_target_percent(stock, -WEIGHT, style=StopOrder(.06))
            
    for stock in open_these:
        if pct__change < -.02:
            order_target_percent(stock, -WEIGHT)

@Quant Trader, how do you tell that he is using an incredible amount of leverage?

Hi @John, in the Risk Metrics shown for this algo, it looks like Beta = 6.77 (times the SPY), whereas Quantopian is looking for algos with Beta as close to ZERO as possible! Also, would you really want to trade anything with a Sharpe ratio of only 0.16? Effectively that means "paying" for any returns in terms of huge drawdowns. For practical purposes, it doesn't matter how large the "Total Returns" APPEAR to be if the account value drew down to zero, i.e. went bankrupt, while in the process of getting to (actually not ever getting to) that supposed, nominally stated return.

New Backtester Screen > Risk Tab > Leverage

If you want to track and plot leverage in the code, have a look at @Blue Seahawk's post on this. If you track it intraday in 'handle_data' it will slow down backtesting quite a bit, I reckon, but it's more accurate than just using record(leverage = context.account.leverage) in 'handle_data', since the custom chart only keeps the last recorded value each day.
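
A minimal sketch of that idea, using the standard record() and context.account.leverage APIs; tracking a running maximum avoids the daily chart hiding intraday spikes:

def initialize(context):
    context.max_leverage_seen = 0

def handle_data(context, data):
    # runs every minute; keep the intraday maximum so the daily chart doesn't hide spikes
    context.max_leverage_seen = max(context.max_leverage_seen, context.account.leverage)
    record(leverage=context.account.leverage,
           max_leverage=context.max_leverage_seen)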

@James Gastineau,

In your code:

    SMA = SimpleMovingAverage(  
        inputs=[USEquityPricing.close],  
                window_length=200)  
    year_return_test = Returns(  
        window_length=100) *100  
    m = year_return_test > 60  
    year_return_test2 = Returns(  
        window_length=100,  
        mask = m) *100  
    SMA_above = year_return_test2 > SMA  

Help me understand the logic: why are you comparing returns to a moving average of price, which you then use as a filter? It doesn't make sense to me, but then again, this could be the discovery of the century!
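
For what it's worth, a filter that compares price to its own moving average (which may be what was intended) would look something like this sketch in Pipeline terms:

from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import SimpleMovingAverage

latest_close = USEquityPricing.close.latest
sma_200 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=200)
price_above_sma = latest_close > sma_200   # a Filter: price currently above its 200-day SMA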

Hi @QuantTrader,
" ... a fair question, how do you value a trading algorithm? "

I'm guessing a bit here, but I assume that probably amongst some hedge funds they might tend to think more in terms of "buying & selling" the PEOPLE who write good algos rather than buying & selling the algos themselves. On that basis, perhaps one might ask how the funds decide what to pay as salaries or bonuses? Employees in general always get paid somewhat less than what they can earn for their employer, so presumably the salaries / bonuses or other equivalent rewards that go to good professional algo writers are approximately based on the sum of the "estimated true values" of the algos they write. Maybe one could work backwards from that....

Looking for a somewhat more direct answer: it is generally the case that the "fair value" of any asset is more-or-less equivalent to the sum of the discounted cash flows that it can generate over its useful lifetime, and the "discount rate" is related to the risk-free rate plus an additional "risk premium" component. This is all very much standard stuff for valuing equities (i.e. fundamental values of stocks), or for resource projects (e.g. oil & gas fields or mining project developments, etc., which was originally my own area of expertise), and at least conceptually it should be possible to do the same for intellectual property (IP) such as algos.

The risk-adjusted return on an algo would, I assume, be the amount that it can earn over and above the risk-free rate (e.g. US treasuries), adjusted for the algo's risk of drawdown (maybe, as you say, based on Sharpe or some other metric). When the algo is leveraged, my guess is that a better approach would be to generate multiple-scenario risk-reward profiles using Monte Carlo simulation, and then come up with a probabilistic estimate of the return on the algo over its "useful lifetime". Whatever THAT might be presumably depends a fair bit on the type of algo itself. My guess is that some of the "data-mining" type algos have only very short lives, compared to those of a more fundamental nature, which are likely to be more robust over time, as Warren Buffett knows ;-))
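
To make that a little more concrete, here is a very rough Monte Carlo sketch; every input (capital, mean return, volatility, discount rate, useful life) is an invented assumption, and it ignores path dependence, leverage and the chance that the algo simply stops working:

import numpy as np

np.random.seed(0)

capital       = 1e6     # hypothetical capital the buyer would run the algo with
annual_mean   = 0.12    # assumed mean annual return of the algo
annual_vol    = 0.20    # assumed annual volatility
risk_free     = 0.03
discount_rate = 0.15    # risk-free rate plus a hefty risk premium
useful_life   = 5       # assumed years before the edge decays
n_paths       = 10000

# simulate annual returns over the algo's assumed useful life
returns = np.random.normal(annual_mean, annual_vol, size=(n_paths, useful_life))
excess_profit = capital * (returns - risk_free)          # profit above the risk-free alternative

# discount each year's excess profit back to today
discount_factors = 1.0 / (1 + discount_rate) ** np.arange(1, useful_life + 1)
value_per_path = (excess_profit * discount_factors).sum(axis=1)

print('median value: %.0f' % np.median(value_per_path))
print('5th to 95th percentile: %.0f to %.0f' % (np.percentile(value_per_path, 5),
                                                np.percentile(value_per_path, 95)))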

It is indeed an interesting question. Anyone have any more specific ideas?

Hi Tony,

it is generally the case that the "fair value" of any asset is more-or-less equivalent to the sum of the discounted cash flows that it can generate over its useful lifetime, and the "discount rate" is related to the risk-free rate plus an additional "risk premium" component.

Ahh, you bring me back to my time as a young M&A specialist in the 80's, and indeed DCF was the standard template for valuation. In the context of a written algo, the ultimate due diligence is actual trading results, on top of perhaps rigorous stress tests for overfitting, performance in different market regimes, risk mitigation and other factors. This establishes something of a proven track record for the author, the duration of which depends on the buyer's threshold: it could be six months, one year or a couple of years. This gives the prospective buyer some level of confidence that the algo performs as intended and as presented by the author, given the whole realm of possibilities under the stress of uncertainty.

I think that Quantopian's business model / proposition to prospective algo authors is perhaps the best and fairest I've seen. It shares with engaged authors half (10%) of the typical performance fee of a hedge fund manager. Although it would be nice to also get a small share of the 2% management fee... wishful thinking. Q offers us all this without the author really having to prove that the algo will do what it is intended to do in real-time, real-money trading. Of course, the caveat is that Q has the option to stop the allocation at any time for whatever reason. The author's only investment is his/her brain and time!

Hi @James, I have no idea whether the original question was intended mostly out of curiosity about valuing IP in general, of which an algo is only one possible example, or whether there was an implicit question about how much the contributors to Q algos "should" receive.

Personally I agree with you about Q's business model/proposition of a % of the profit pie, whatever that turns out to be, over the useful life of the algo, whatever that turns out to be, as something that is indeed very fair, reasonable and reality-based, and that saves anyone having to calculate a "theoretical value" at all.

I know there are a number of other crowd-sourced funds around, but the only other one that I have bothered to investigate in detail and to participate in is Apiary Fund. Their business model is to teach people to trade highly leveraged FX on a demo account and then, if they are any good, give them a real account, let them trade it (either manually or with an algo, however the author/trader chooses), and the participant then gets to keep the major part of the profit for him/herself. Again, as with Q, there is no risk to the author's/trader's own PERSONAL capital, although if they fail to perform then the funded account gets taken away, whereas here in Q we can, at least as far as I know, have as many tries as we like at algo writing and it doesn't matter if all of them are bad ;-)) and we never have to actually prove ourselves as traders.

Although personally I enjoy both algo writing AND trading, and I consider that these different skill sets complement and reinforce each other well, nevertheless they are very different. I suspect that some people who write in here at Q do not actually trade at all. That's OK, different courses for different horses, and Q certainly provides a very enjoyable, low-stress environment for people who prefer to theorize rather than actually putting anything on the line.

Personally, although I do enjoy "real trading" as well as code writing, I applaud Q's decision to do the actual trading and let the authors just relax. On that basis, I think Q's "10% of profits to the author" model is very fair and obviously much less stressful than, for example, Apiary's model of "very high % payout but you have to trade it yourself". Certainly I have no complaints at all about Q's chosen business model. Cheers, TonyM.

Leverage reached 2.57 million above, a little high.

See if any of the changes in this modified version of your code might help toward your goal.

This keeps leverage closer to reasonable. I used a short timeframe to be able to attach it more quickly, and it happened to look pretty good. Longer term it isn't incredible, yet it is manageable and ought to be interesting for you in making changes.

While making changes, I'd like to suggest a tip: keep at least two tabs open on the same URL (they can contain slightly different code) and run them both (or all) at the same time, a beautiful thing. If you use two browser windows, you can alt-tab back and forth while they run; keep the latest best version in a completed state. If you use tabs, use ctrl-tab instead (add shift to go backward), with the mouse cursor over the chart. That process can streamline development.

(Backtest attached.)

'''
Try adding https://www.quantopian.com/posts/track-orders
'''

from quantopian.pipeline              import Pipeline
from quantopian.algorithm             import attach_pipeline, pipeline_output
from quantopian.pipeline.data         import Fundamentals
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors      import Returns
from quantopian.pipeline.factors      import SimpleMovingAverage as SMA
from quantopian.pipeline.filters      import QTradableStocksUS

def initialize(context):
    #set_benchmark(sid(21519))
    context.IWM = sid(21519)

    attach_pipeline(make_pipeline(), 'pipe')

    schedule_function(close_some,    date_rules.every_day(), time_rules.market_open())
    schedule_function(ma_crossovers, date_rules.every_day(), time_rules.market_open(hours=1))

    context.mxlv = 0        # maximum leverage for chart
    for i in range(1, 391):
        schedule_function(mxlv, date_rules.every_day(), time_rules.market_open(minutes=i))

def make_pipeline():
    m = QTradableStocksUS()

    sma_0   = SMA(inputs=[USEquityPricing.close], window_length=200)
    returns = Returns(window_length=100, mask=m) * 100
    m &= returns < 115
    m &= returns > 60
    m &= returns > sma_0

    pct5   = Returns(inputs=[USEquityPricing.close], window_length= 2, mask=m)
    sma_10 = SMA    (inputs=[USEquityPricing.close], window_length=10, mask=m)
    sma_7  = SMA    (inputs=[USEquityPricing.close], window_length= 7, mask=m)
    sma_1  = SMA    (inputs=[USEquityPricing.close], window_length= 1, mask=m)

    #pe_ratio  = Fundamentals.pe_ratio.latest
    peg_ratio = Fundamentals.peg_ratio.latest  # all nan, why?
    #m &= peg_ratio <= 1

    return Pipeline(
        screen  = m,
        columns = {
            #'pe_ratio' : pe_ratio,
            'peg_ratio': peg_ratio,
            'returns'  : returns,
            'sma_1'    : sma_1,
            'sma_7'    : sma_7,
            'sma_10'   : sma_10,
            'pct5'     : pct5,
        }
    )

def before_trading_start(context, data):
    record(Lv   = context.account.leverage)
    record(MxLv = context.mxlv)
    record(Cash = context.portfolio.cash)
    record(nPos = len(context.portfolio.positions))  # number of positions

    context.out  = pipeline_output('pipe')
    context.hist = data.history(context.IWM, 'price', 365, '1d')

    if not len(context.out):
        log.info('nothing from pipe')
        assert(0)  # deliberate crash, context.out, pipe, is empty

    do_log_preview = 1    # a way to toggle this off when it becomes annoying
    if do_log_preview:
        try: context.log_data_done
        except:
            log_data(context, data, context.out, 4)        # show pipe info once

def close_some(context, data):
    for s, position in context.portfolio.positions.items():
        prc = data.current(s, 'price')
        cb  = position.cost_basis
        stop_price = cb * .99

        if prc <= stop_price:    # Sell at market if the last price is <= the stop price
            order_target(s, 0)
            log.info( 'SELL STOPLOSS {}  prc {} <= stop {}   cb {}'.format(s.symbol.rjust(6), 
                ('%.2f' % prc).rjust(7), ('%.2f' % stop_price).rjust(7), '%.2f' % cb) )
            # '%.2f' %    ... python notation saying just two decimal places, it does rounding.

    open_orders = get_open_orders()
    pct__change = context.hist.pct_change()[-1]
    close_these = context.out.query('sma_1 < sma_10').index
    for s in close_these:
        if not data.can_trade(s): continue
        if s in open_orders:      continue
        if pct__change < -.02:
            order_target_percent(s, 0)

def ma_crossovers(context,data):
    weight      = 1.0 / len(context.out)
    open_orders = get_open_orders()
    open_these  = context.out.query('sma_1 > sma_10').index
    #sma_7_IWM   = context.hist[-7:].mean()
    #sma_1_IWM   = context.hist[-1:].mean()

    for s in open_these:
        if not data.can_trade(s): continue
        if s in open_orders:      continue
        if s in context.portfolio.positions: continue
        prc = data.current(s, 'price')

        # Buy with stop says buy if price goes above that stop value
        order_target_percent(s,  weight, style=StopOrder(prc * 1.005))
        
        # If negative weight, saying short if prc drops. Try it instead. :}
        #order_target_percent(s, -weight, style=StopOrder(prc * 9.995))
        
        # original
        #order_target_percent(s, -weight, style=StopOrder(.06))

# <== You can click the little arrow left of the next line number 119 to collapse this out of the way ...        
def log_data(context, data, z, num, fields=None):
    ''' Log info about pipeline output or, z can be any DataFrame or Series
    https://www.quantopian.com/posts/overview-of-pipeline-content-easy-to-add-to-your-backtest
    '''
    if 'log_init_done' not in context:  # {:,} magic for adding commas
        log.info('${:,}    {} to {}'.format(int(context.portfolio.starting_cash),
                get_environment('start').date(), get_environment('end').date()))
        context.log_data_done = 1

    if not len(z):
        log.info('Empty')
        return

    # Options
    log_nan_only = 0          # Only log if nans are present
    show_sectors = 0          # If sectors, do you want to see them or not
    show_sorted_details = 1   # [num] high & low securities sorted, each column
    padmax = 6                # num characters for each field, starting point

    # Series ......
    if 'Series' in str(type(z)):    # is Series, not DataFrame
        nan_count = len(z[z != z])
        nan_count = 'NaNs {}/{}'.format(nan_count, len(z)) if nan_count else ''
        if (log_nan_only and nan_count) or not log_nan_only:
            pad = max( padmax, len('%.5f' % z.max()) )
            log.info('{}{}{}   Series  len {}'.format('min'.rjust(pad+5),
                'mean'.rjust(pad+5), 'max'.rjust(pad+5), len(z)))
            log.info('{}{}{} {}'.format(
                ('%.5f' % z.min()) .rjust(pad+5),
                ('%.5f' % z.mean()).rjust(pad+5),
                ('%.5f' % z.max()) .rjust(pad+5),
                nan_count
            ))
            log.info('High\n{}'.format(z.sort_values(ascending=False).head(num)))
            log.info('Low\n{}' .format(z.sort_values(ascending=False).tail(num)))
        return

    # DataFrame ......
    content_min_max = [ ['','min','mean','max',''] ] ; content = ''
    for col in z.columns:
        try: z[col].max()
        except: continue   # skip non-numeric
        if col == 'sector' and not show_sectors: continue
        nan_count = len(z[col][z[col] != z[col]])
        nan_count = 'NaNs {}/{}'.format(nan_count, len(z)) if nan_count else ''
        padmax    = max( padmax, len(str(z[col].max())) )
        content_min_max.append([col, str(z[col] .min()), str(z[col].mean()), str(z[col] .max()), nan_count])
    if log_nan_only and nan_count or not log_nan_only:
        content = 'Rows: {}  Columns: {}'.format(z.shape[0], z.shape[1])
        if len(z.columns) == 1: content = 'Rows: {}'.format(z.shape[0])

        paddings = [6 for i in range(4)]
        for lst in content_min_max:    # set max lengths
            i = 0
            for val in lst[:4]:    # value in each sub-list
                paddings[i] = max(paddings[i], len(str(val)))
                i += 1
        headr = content_min_max[0]
        content += ('\n{}{}{}{}{}'.format(
             headr[0] .rjust(paddings[0]),
            (headr[1]).rjust(paddings[1]+5),
            (headr[2]).rjust(paddings[2]+5),
            (headr[3]).rjust(paddings[3]+5),
            ''
        ))
        for lst in content_min_max[1:]:    # populate content using max lengths
            content += ('\n{}{}{}{}     {}'.format(
                lst[0].rjust(paddings[0]),
                lst[1].rjust(paddings[1]+5),
                lst[2].rjust(paddings[2]+5),
                lst[3].rjust(paddings[3]+5),
                lst[4],
            ))
        log.info(content)

    if not show_sorted_details: return
    if len(z.columns) == 1:     return     # skip detail if only 1 column
    if fields == None: details = z.columns
    for detail in details:
        if detail == 'sector' and not show_sectors: continue
        hi = z[details].sort_values(by=detail, ascending=False).head(num)
        lo = z[details].sort_values(by=detail, ascending=False).tail(num)
        content  = ''
        content += ('_ _ _   {}   _ _ _'  .format(detail))
        content += ('\n\t... {} highs\n{}'.format(detail, str(hi)))
        content += ('\n\t... {} lows \n{}'.format(detail, str(lo)))
        if log_nan_only and not len(lo[lo[detail] != lo[detail]]):
            continue  # skip if no nans
        log.info(content)
        
def mxlv(context, data):
    if context.account.leverage > context.mxlv:
        context.mxlv = context.account.leverage


I have changed my code and narrowed down the stock choices so that they are not penny stocks.

(Backtest attached.)

from zipline.api import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import Returns
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.factors import SimpleMovingAverage
import pandas as pd
import numpy as np
import math 
from quantopian.pipeline.factors import CustomFactor
import quantopian.algorithm as algo
import quantopian.optimize as opt
from quantopian.pipeline.data.psychsignal import stocktwits
from quantopian.pipeline.experimental import risk_loading_pipeline
import talib

def initialize(context):
    my_pipe = make_pipeline()
    attach_pipeline(my_pipe, 'my_pipeline')   
    schedule_function(ma_crossover_handling, 
                      date_rules.every_day(), 
                      time_rules.market_open(hours=1))
    set_benchmark(sid(21519))
   

    return my_pipe

    context.max_leverage = 1.0
    context.max_pos_size = 0.015
    context.max_turnover = 0.95
    
    

    # Schedule rebalance function
    algo.schedule_function(
        rebalance,
        algo.date_rules.week_start(),
        algo.time_rules.market_open(),)

def make_pipeline():
    universe = QTradableStocksUS()
    
    pipe = Pipeline()
    
    SMA = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=200)
    
    year_return_test = Returns(
        window_length=100) *100
    
    m = year_return_test > 60
    
    year_return_test2 = Returns(
        window_length=100,
        mask = m) *100
    
    SMA_above = year_return_test2 > SMA
    
    year_return_test3 = Returns(
        window_length=100,
        mask = SMA_above) *100
    
    m2 = year_return_test3 < 115
    
    year_return = Returns(
        window_length=100,
        mask = m2) *100
    
    sma_10 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=10)
    pass 
    
    sma_7 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=7)
    pass
    
    sma_1 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=1)
   
    pass
    
    pe_ratio = Fundamentals.pe_ratio.latest
    
    
    peg_ratio = Fundamentals.peg_ratio.latest
    peg_ratio_1 = peg_ratio <= 1
    
    pct_change_2 = Returns(inputs=[USEquityPricing.close], 
                                window_length=2, 
                                mask = universe)
    pct = pct_change_2 < -.02
    
    

    
    
    return Pipeline(
        columns={
            'year_return': year_return,
            'pe_ratio': pe_ratio,
            'peg_ratio':peg_ratio_1,
            'sma_1': sma_1,
            'sma_7':sma_7,
            'sma_10':sma_10,
            'pct': pct
            
            
        }, screen=peg_ratio_1 & universe
    )
def before_trading_start(context, data):
    record(Lv   = context.account.leverage)
    record(Cash = context.portfolio.cash)
    record(nPos = len(context.portfolio.positions))  # number of positions


    # Store our pipeline output DataFrame in context
    context.output = pipeline_output('my_pipeline').dropna()

def rebalance(context, data):
    # Retrieve alpha from pipeline output
    alpha = context.pipeline_data.sentiment_score

    if not alpha.empty:
        # Create MaximizeAlpha objective
        objective = opt.MaximizeAlpha(alpha)

        # Create position size constraint
        constrain_pos_size = opt.PositionConcentration.with_equal_bounds(
            -context.max_pos_size,
            context.max_pos_size
        )

        # Constrain target portfolio's leverage
        max_leverage = opt.MaxGrossExposure(context.max_leverage)

        # Ensure long and short books
        # are roughly the same size
        dollar_neutral = opt.DollarNeutral()

        # Constrain portfolio turnover
        max_turnover = opt.MaxTurnover(context.max_turnover)

        # Constrain target portfolio's risk exposure
        # By default, max sector exposure is set at
        # 0.2, and max style exposure is set at 0.4
        factor_risk_constraints = opt.experimental.RiskModelExposure(
            context.risk_factor_betas,
            version=opt.Newest
        )

        # Rebalance portfolio using objective
        # and list of constraints
        algo.order_optimal_portfolio(
            objective=objective,
            constraints=[
                constrain_pos_size,
                max_leverage,
                dollar_neutral,
                max_turnover,
                factor_risk_constraints,
            ]
        )
 
def ma_crossover_handling(context,data):
   
    
    WEIGHT = 1.0 / len('my_pipeline')
    

    context.IWM = sid(21519)
    hist = data.history(context.IWM, 'price', 365, '1d')
    
    
    sma_5_IWM = hist[-5:].mean()
    sma_1_IWM = hist[-1:].mean()
    sma_10_IWM = hist[-10:].mean()
    
    
    change = hist.pct_change()[-2]
    
    pct_change = change < -.02
    
    
    
    #open_orders = get_open_orders()
    open_rules = 'sma_1 > sma_10'
    open_these = context.output.query(open_rules).index.tolist()
   
    for stock in open_these:
        if stock not in context.portfolio.positions and data.can_trade(stock):
            order_target_percent(stock, WEIGHT)
    for stock in open_these:
        if stock in context.portfolio.positions and data.can_trade(stock):
            if 'pct' < -.02:
                order_target_percent(stock, 0)

If your question, James, is "How do you value the algorithm for a trade sale?", there is a method of estimation using Net Present Value (NPV) by Discounted Cash Flow (DCF) of a series of future revenues from trading the algorithm, plus a Terminal Value using, say, the Earnings Multiple (EBITDA x Multiplier) method.

In reality, you should find that the valuation would be very tenuous, because future revenues can be arguably ephemeral (if evidenced by backtests only) without ascertainable evidence of future maintainable earnings (FME), in contrast to, for example, an investment entity or trading enterprise, where there will be assets or inventory for due diligence to ascertain revenues to support forward cash flow and hence valuation.

In short, the algorithm would be worth only the capital invested minus the risks entered into at startup, plus/minus the incremental series of profits/losses (and costs) as the algorithm accumulates a forward trading record.
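
As a toy illustration of that NPV-plus-terminal-value arithmetic (every number below is invented purely for the example):

expected_profits  = [120e3, 150e3, 180e3]   # hypothetical net trading profits, years 1-3
discount_rate     = 0.20                    # risk-free rate plus a large risk premium
terminal_multiple = 3.0                     # crude earnings multiple applied to the final year

npv = sum(cf / (1 + discount_rate) ** (i + 1)
          for i, cf in enumerate(expected_profits))
terminal_value = (terminal_multiple * expected_profits[-1]
                  / (1 + discount_rate) ** len(expected_profits))

print('indicative value: %.0f' % (npv + terminal_value))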

That said, this is precisely why the Quantopian Contest is such a valuable (and proxy valuation) tool for our algorithms: the contest validates, through out-of-sample trading, the FME that your algorithm is designed to produce and that effectuates its value.

Alternatively, trade your algorithm live to prove its value!

The sum of all its future cash flows discounted back at an appropriate rate. Oh wait...

Hi @Karl,
You are absolutely correct with your comment "... In reality, you should find that the valuation would be very tenuous."
The "correct", accurate and unambiguous valuation of stocks in companies with well-established ongoing businesses is often non-trivial and subject to a wide range of uncertainty depending on the assumptions used. The associated lack of consensus about "true fair value" is of course one of the main reasons why stock prices fluctuate in the first place. So yes, in the case of an algo with dubious and very uncertain future earning power, the value will definitely be subject to a VERY wide range of uncertainty. Nevertheless it can still be estimated, and I agree with your comment that the Q Contest is a useful proxy valuation tool.

I do not fully understand your comment that relates the value of the algo to the amount of capital invested. I think what you are driving at is that the "worth" of an algo, and therefore how much someone would logically be willing to pay for it, is not just a property of the algo itself, but is also a function of the amount of money that the buyer of the algo intends to invest when trading it. I'm sure that is true, and so a big fund is likely to pay more for a good algo than a small fund would pay. So the implication for the author is: to maximize the benefit, look to sell it to a BIG fund.

However, in the end, yeah, trade it live, prove it works, and then if it really makes x,000 % profit in 3 years, just keep it!! ;-))

@Robert ..... yes, waiting ........

Is 20x leverage too much?

(Backtest attached.)

from zipline.api import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import Returns
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.factors import SimpleMovingAverage
import pandas as pd
import numpy as np
import math 
from quantopian.pipeline.factors import CustomFactor
import quantopian.algorithm as algo
import quantopian.optimize as opt
from quantopian.pipeline.data.psychsignal import stocktwits
from quantopian.pipeline.experimental import risk_loading_pipeline
import talib

def initialize(context):
    my_pipe = make_pipeline()
    attach_pipeline(my_pipe, 'my_pipeline')   
    schedule_function(ma_crossover_handling, 
                      date_rules.every_day(), 
                      time_rules.market_open(hours=1))
    set_benchmark(sid(21519))
   

    return my_pipe

    context.max_leverage = 1.0
    context.max_pos_size = 0.015
    context.max_turnover = 0.95
    
    

    # Schedule rebalance function
    algo.schedule_function(
        rebalance,
        algo.date_rules.week_start(),
        algo.time_rules.market_open(),)

    
def make_pipeline():
    universe = QTradableStocksUS()
    
    pipe = Pipeline()
    
    SMA = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=200)
    
    year_return_test = Returns(
        window_length=252) *100
    
    m = year_return_test > 45
    
    year_return_test2 = Returns(
        window_length=252,
        mask = m) *100
    
    SMA_above = year_return_test2 > SMA
    
    year_return_test3 = Returns(
        window_length=252,
        mask = SMA_above) *100
    
    m2 = year_return_test3 < 70
    
    year_return = Returns(
        window_length=252,
        mask = m2) *100
    
    sma_10 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=10)
    pass 
    
    sma_7 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=7)
    pass
    
    sma_1 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=1)
   
    pass
    
    pe_ratio = Fundamentals.pe_ratio.latest
    
    
    peg_ratio = Fundamentals.peg_ratio.latest
    peg_ratio_1 = peg_ratio <= 1
    
    pct_change_2 = Returns(inputs=[USEquityPricing.close], 
                                window_length=2, 
                                mask = universe)
    pct = pct_change_2 < -.02
    
    

    
    
    return Pipeline(
        columns={
            'year_return': year_return,
            'pe_ratio': pe_ratio,
            'peg_ratio':peg_ratio_1,
            'sma_1': sma_1,
            'sma_7':sma_7,
            'sma_10':sma_10,
            'pct': pct
            
            
        }, screen=peg_ratio_1 & universe
    )
def before_trading_start(context, data):
    record(Lv   = context.account.leverage)
    record(Cash = context.portfolio.cash)
    record(nPos = len(context.portfolio.positions))  # number of positions


    # Store our pipeline output DataFrame in context
    context.output = pipeline_output('my_pipeline').dropna()

def rebalance(context, data):
    # Retrieve alpha from pipeline output
    alpha = context.pipeline_data.sentiment_score

    if not alpha.empty:
        # Create MaximizeAlpha objective
        objective = opt.MaximizeAlpha(alpha)

        # Create position size constraint
        constrain_pos_size = opt.PositionConcentration.with_equal_bounds(
            -context.max_pos_size,
            context.max_pos_size
        )

        # Constrain target portfolio's leverage
        max_leverage = opt.MaxGrossExposure(context.max_leverage)

        # Ensure long and short books
        # are roughly the same size
        dollar_neutral = opt.DollarNeutral()

        # Constrain portfolio turnover
        max_turnover = opt.MaxTurnover(context.max_turnover)

        # Constrain target portfolio's risk exposure
        # By default, max sector exposure is set at
        # 0.2, and max style exposure is set at 0.4
        factor_risk_constraints = opt.experimental.RiskModelExposure(
            context.risk_factor_betas,
            version=opt.Newest
        )

        # Rebalance portfolio using objective
        # and list of constraints
        algo.order_optimal_portfolio(
            objective=objective,
            constraints=[
                constrain_pos_size,
                max_leverage,
                dollar_neutral,
                max_turnover,
                factor_risk_constraints,
            ]
        )
 
def ma_crossover_handling(context,data):
   
    
    
    WEIGHT = 1.0 / len('my_pipeline')
    

    context.IWM = sid(21519)
    hist = data.history(context.IWM, 'price', 365, '1d')
    
    
    sma_5_IWM = hist[-5:].mean()
    sma_1_IWM = hist[-1:].mean()
    sma_10_IWM = hist[-10:].mean()
    
    
    change = hist.pct_change()[-2]
    
    pct_change = change < -.02
    
    
    
    #open_orders = get_open_orders()
    open_rules = 'sma_1 > sma_10'
    open_these = context.output.query(open_rules).index.tolist()
   
    for stock in open_these:
        if stock not in context.portfolio.positions and data.can_trade(stock):
            order_target_percent(stock, WEIGHT)
    for stock in open_these:
        if stock in context.portfolio.positions and data.can_trade(stock):
            if 'pct' < -.02:
                order_target_percent(stock, 0)

Hi James,

To begin with, 20x leverage is too much, you should aim for 1x at most.

I've taken the liberty of attaching a backtest of your code with all the minor logic errors fixed.

(Backtest attached.)

from zipline.api import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.factors import Returns, SimpleMovingAverage

def initialize(context):
    
    
    attach_pipeline(make_pipeline(), 'my_pipeline')  
    
    schedule_function(ma_crossover_handling, 
                      date_rules.every_day(), 
                      time_rules.market_open(hours=1))
    
    
    set_benchmark(sid(21519))


    
def make_pipeline():
    universe = QTradableStocksUS()
    
    
    SMA = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=200)
    
    year_return_test = Returns(
        window_length=252) *100
    
    m = year_return_test > 45
    
    year_return_test2 = Returns(
        window_length=252,
        mask = m) *100
    
    SMA_above = year_return_test2 > SMA
    
    year_return_test3 = Returns(
        window_length=252,
        mask = SMA_above) *100
    
    m2 = year_return_test3 < 70
    
    year_return = Returns(
        window_length=252,
        mask = m2) *100
    
    sma_10 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=10)
    
    sma_7 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=7)
    
    sma_1 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=1)
   
    
    pe_ratio = Fundamentals.pe_ratio.latest
    
    
    peg_ratio = Fundamentals.peg_ratio.latest
    peg_ratio_1 = peg_ratio <= 1
    
    

    
    
    return Pipeline(
        columns={
            'year_return': year_return,
            'pe_ratio': pe_ratio,
            'peg_ratio':peg_ratio_1,
            'sma_1': sma_1,
            'sma_7':sma_7,
            'sma_10':sma_10,            
            
        }, screen=peg_ratio_1 & universe
    )


def before_trading_start(context, data):
    record(Lv   = context.account.leverage)
    record(Cash = context.portfolio.cash)
    record(nPos = len(context.portfolio.positions))  # number of positions

    context.output = pipeline_output('my_pipeline').dropna()


 
def ma_crossover_handling(context,data):
   
    
    
    WEIGHT = 1.0 / len(context.output)
    

    context.IWM = sid(21519)
    hist = data.history(context.IWM, 'price', 365, '1d')

    
    change = hist.pct_change()[-2]
    
    open_rules = 'sma_1 > sma_10'
    open_these = context.output.query(open_rules).index.tolist()
   
    for stock in open_these:
        if data.can_trade(stock):
            order_target_percent(stock, WEIGHT)
    for stock in open_these:
        if stock in context.portfolio.positions and data.can_trade(stock):
            if change < -.02:
                order_target_percent(stock, 0)
                
    for stock in context.portfolio.positions:
        if stock not in context.output.index:
            order_target_percent(stock, 0)

That depends. If you deposit a decent amount (e.g. $10M or so) as minimum net liq, some brokers might offer you a 20x credit line to be used in a more risk-neutral portfolio/strategy.

For this one, yeah it's too much.

What would the rate of interest be on the line of credit? And what would the proposed return be on the market neutral system?

I changed the algorithm and got the leverage to under 10, so is the algorithm tradable?

(Backtest attached.)

from zipline.api import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import Returns
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.factors import SimpleMovingAverage
import pandas as pd
import numpy as np
import math 
from quantopian.pipeline.factors import CustomFactor
import quantopian.algorithm as algo
import quantopian.optimize as opt
from quantopian.pipeline.data.psychsignal import stocktwits
from quantopian.pipeline.experimental import risk_loading_pipeline
import talib

def initialize(context):
    my_pipe = make_pipeline()
    attach_pipeline(my_pipe, 'my_pipeline')   
    schedule_function(ma_crossover_handling, 
                      date_rules.every_day(), 
                      time_rules.market_open(hours=1))
    set_benchmark(sid(21519))
    
    return my_pipe

    context.max_leverage = 1.0
    context.max_pos_size = 0.015
    context.max_turnover = 0.95
    
    

    # Schedule rebalance function
    algo.schedule_function(
        rebalance,
        algo.date_rules.week_start(),
        algo.time_rules.market_open(),)

def make_pipeline():
    universe = QTradableStocksUS()
    
    pipe = Pipeline()
    
    SMA = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=200)
    
    year_return_test = Returns(
        window_length=100) *100
    
    m = year_return_test > 60
    
    year_return_test2 = Returns(
        window_length=100,
        mask = m) *100
    
    SMA_above = year_return_test2 > SMA
    
    year_return_test3 = Returns(
        window_length=100,
        mask = SMA_above) *100
    
    m2 = year_return_test3 < 115
    
    year_return = Returns(
        window_length=100,
        mask = m2) *100
    
    sma_10 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=10)
    pass 
    
    sma_7 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=7)
    pass
    
    sma_1 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=1)
   
    pass
    
    pe_ratio = Fundamentals.pe_ratio.latest
    
    
    peg_ratio = Fundamentals.peg_ratio.latest
    peg_ratio_1 = peg_ratio <= 1
    
    pct_change_2 = Returns(inputs=[USEquityPricing.close], 
                                window_length=2, 
                                mask = universe)
    pct = pct_change_2 < -.02
    
    

    
    
    return Pipeline(
        columns={
            'year_return': year_return,
            'pe_ratio': pe_ratio,
            'peg_ratio':peg_ratio_1,
            'sma_1': sma_1,
            'sma_7':sma_7,
            'sma_10':sma_10,
            'pct': pct
            
            
        }, screen=peg_ratio_1 & universe
    )
def before_trading_start(context, data):
    # Store our pipeline output DataFrame in context
    context.output = pipeline_output('my_pipeline').dropna()

def rebalance(context, data):
    # Retrieve alpha from pipeline output
    alpha = context.pipeline_data.sentiment_score

    if not alpha.empty:
        # Create MaximizeAlpha objective
        objective = opt.MaximizeAlpha(alpha)

        # Create position size constraint
        constrain_pos_size = opt.PositionConcentration.with_equal_bounds(
            -context.max_pos_size,
            context.max_pos_size
        )

        # Constrain target portfolio's leverage
        max_leverage = opt.MaxGrossExposure(context.max_leverage)

        # Ensure long and short books
        # are roughly the same size
        dollar_neutral = opt.DollarNeutral()

        # Constrain portfolio turnover
        max_turnover = opt.MaxTurnover(context.max_turnover)

        # Constrain target portfolio's risk exposure
        # By default, max sector exposure is set at
        # 0.2, and max style exposure is set at 0.4
        factor_risk_constraints = opt.experimental.RiskModelExposure(
            context.risk_factor_betas,
            version=opt.Newest
        )

        # Rebalance portfolio using objective
        # and list of constraints
        algo.order_optimal_portfolio(
            objective=objective,
            constraints=[
                constrain_pos_size,
                max_leverage,
                dollar_neutral,
                max_turnover,
                factor_risk_constraints,
            ]
        )
 
def ma_crossover_handling(context,data):
   
    
    WEIGHT = 1.0 / len('my_pipeline')
    

    context.IWM = sid(21519)
    hist = data.history(context.IWM, 'price', 365, '1d')
    
    
    sma_5_IWM = hist[-5:].mean()
    sma_1_IWM = hist[-1:].mean()
    sma_10_IWM = hist[-10:].mean()
    
    
    change = hist.pct_change()[-2]
    
    pct_change = change < -.02
    
    
    
    #open_orders = get_open_orders()
    open_rules = 'sma_1 > sma_10'
    open_these = context.output.query(open_rules).index.tolist()
   
    for stock in open_these:
        if stock not in context.portfolio.positions and data.can_trade(stock):
            order_target_percent(stock, WEIGHT)
    for stock in open_these:
        if stock in context.portfolio.positions and data.can_trade(stock):
            if 'pct' < -.02:
                order_target_percent(stock, 0)
                
    
   
            
                
There was a runtime error.

@James G -- What is an algorithm worth? That's easy. It's worth whatever somebody is willing to pay for it. It will be worth different things depending on who is buying and what kind of value it brings to the table. Is it uncorrelated to their existing strategies? Does it meet their risk requirements? How much capital do they intend to trade with it?

I didn't see anybody point this out to you, but your original algorithm here does not return 150,000% in three years as it might appear (due to a naive quirk in Quantopian's backtester). In reality, there's no recovering from being down 1x (losing all your money). You can't keep borrowing money after you've lost the collateral. In practice, an algorithm will get shut off much sooner -- 0.1x or 0.2x losses maybe. I think Quantopian has been known to kill algos as soon as they hit 0.05x losses. If you look at your performance by Oct 31st, 2016, it has lost more than 5,000x the original investment. That's enough to take down the hedge fund and their bank. Let's say somebody invested $10mm in your algorithm. By Oct 31st 2016 they'd owe $50bn, which of course they don't have because that's the size of a large automobile manufacturer. Furthermore, algorithms almost always perform better in backtest than in real-life, because they will always in part be fit to the historical data. In other words, algorithms will almost always have elements that are not predictive but that were included because they improved results by pure chance.
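
To make the compounding point concrete, here is a tiny back-of-the-envelope sketch in plain Python (the $10mm and 5,000x figures are just the illustration above, nothing more):

start = 10e6                     # $10mm invested
worst = start - 5000 * start     # roughly -$50bn owed by Oct 31st 2016
print(worst)                     # -49990000000.0
# A real account is liquidated long before equity reaches zero, so everything the
# backtester "earns" after this point -- including the headline 150,000% -- is an
# artifact of simulating trades with money that no longer exists.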

This is why algorithms are never valued based on their simulated returns. Your algorithm, as it is, is worthless to any potential buyer. They all know it's a hazard. Buyers will be looking at your drawdowns, volatility, Sharpe Ratio, and out-of-sample performance. They will need evidence that your algorithm is indeed predictive and not just fit or lucky. They will need evidence that it is not a hazard. So if you believe you have discovered a consistent, robust source of alpha, you'll need to rework it in a fashion that keeps risk under control.

You should keep leverage under 1.0 until you understand the very strict maintenance requirements and borrowing costs for going over that (none of which is simulated by Quantopian).
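
As a rough, hypothetical illustration of the borrowing cost alone (the 5% rate here is made up; real margin rates and maintenance requirements vary by broker):

capital  = 100000.0                      # your own equity
leverage = 2.0                           # gross exposure / equity
borrowed = capital * (leverage - 1.0)    # $100k on margin
rate     = 0.05                          # assumed annual margin interest

interest_drag = borrowed * rate / capital
print(interest_drag)                     # 0.05 -> 5% of equity per year just to service the loan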

A good way to keep risk under control is to hedge against common risk factors -- dollar exposure, beta exposure, sector exposure, etc. There's a lot written about this in the Quantopian documentation.
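
As one simplified, hypothetical example of hedging a single factor (synthetic data only; in practice you would use the optimize constraints shown elsewhere in this thread):

import numpy as np

np.random.seed(1)
mkt  = np.random.normal(0.0005, 0.010, 252)           # synthetic daily market returns
book = 1.2 * mkt + np.random.normal(0, 0.005, 252)    # a long book with beta of about 1.2

beta  = np.cov(book, mkt)[0, 1] / np.var(mkt, ddof=1)  # estimated beta of the long book
hedge = -beta                                          # short this fraction of the book in the index

residual = book + hedge * mkt                          # roughly beta-neutral returns
print(round(beta, 2))                                  # ~1.2 before the hedge
print(round(np.cov(residual, mkt)[0, 1] / np.var(mkt, ddof=1), 2))  # ~0.0 after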

Quantopian is a potential buyer of your algorithm. You'll need to make it conform to their requirements. (See the "contest" for more details.) I forget exactly what they pay, but it's based on performance. (I vaguely remember it being half of their 2&20). Other potential buyers might pay you a subscription fee for the algorithm's exhaust (buy-and-sell signals) or they may pay you a larger fee for your IP.

Working from the Jamie Veitch algo above, which contained leverage, one contributor to the +40% increase is that history mean() (averaging the IWM change over a 21-day window instead of using a single day's change).
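
For anyone curious about that difference, here is a small pandas sketch (synthetic prices, purely illustrative) contrasting the single previous-day change the earlier version exits on with the 21-day mean change used below:

import numpy as np
import pandas as pd

np.random.seed(0)
idx    = pd.date_range('2016-01-04', periods=60, freq='B')
prices = pd.Series(100 * np.cumprod(1 + np.random.normal(0, 0.01, 60)), index=idx)

one_day = prices.pct_change().iloc[-2]           # yesterday's move only (noisy)
monthly = prices.iloc[-21:].pct_change().mean()  # average daily move over ~1 trading month (smoother)
print(one_day < -0.02, monthly < -0.02)          # the -2% exit fires far more often on the noisy number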

[Attached backtest: Clone Algorithm (10 clones). Performance metrics did not load.]
'''
2016-09-06 07:00 WARN Cannot place order for CSH, as it has de-listed. Any existing positions for this asset will be liquidated on 2016-09-08 00:00:00+00:00.

   ... at the time of negative cash dip.

'''

from zipline.api import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data    import Fundamentals
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.factors import Returns, SimpleMovingAverage
from pytz import timezone as _tz

def initialize(context):
    context.IWM = sid(21519)
    set_benchmark(context.IWM)

    attach_pipeline(make_pipeline(), 'my_pipeline')

    schedule_function(closing,date_rules.every_day(),time_rules.market_open(minutes=30))
    schedule_function(trade,  date_rules.every_day(),time_rules.market_open(hours=1))
    for i in range(1, 391):
        break  # off
        schedule_function(track_orders, date_rules.every_day(), time_rules.market_open(minutes=i))

def make_pipeline():
    m = QTradableStocksUS()   # m is mask

    pct_ret = Returns(window_length=252, mask=m) * 100   # using the mask
    m &= pct_ret < 70                                    # adding to mask
    m &= pct_ret > 45
    
    SMA = SimpleMovingAverage(inputs=[USEquityPricing.close],window_length=200,mask=m)
    m &= pct_ret > SMA

    sma_10 = SimpleMovingAverage(inputs=[USEquityPricing.close],window_length=10,mask=m)
    sma_7  = SimpleMovingAverage(inputs=[USEquityPricing.close],window_length= 7,mask=m)
    sma_1  = SimpleMovingAverage(inputs=[USEquityPricing.close],window_length= 1,mask=m)

    pe_ratio  = Fundamentals.pe_ratio .latest
    peg_ratio = Fundamentals.peg_ratio.latest
    m &= pe_ratio .notnull()
    m &= peg_ratio.notnull()
    m &= peg_ratio <= 1

    return Pipeline(
        screen  = m,
        columns = {
            'yr'    : pct_ret,
            'pe'    : pe_ratio,
            'peg'   : peg_ratio,
            'sma_1' : sma_1,
            'sma_7' : sma_7,
            'sma_10': sma_10,
        }, 
    )

def before_trading_start(context, data):
    record(Lv   = context.account.leverage)
    record(Cash = context.portfolio.cash)
    record(nPos = len(context.portfolio.positions))  # number of positions

    context.output = pipeline_output('my_pipeline').dropna()

    do_log_preview = 1    # a way to toggle this off when it becomes annoying
    if do_log_preview:
        try:    context.log_data_done
        except: log_data(context, context.output, 5)        # show pipe info once

def trade(context,data):
    #WEIGHT = 1.0 / len(context.output)

    #hist = data.history(context.IWM, 'price', 5, '1d')
    #change = hist.pct_change()[-2]
    
    wndw   = 21  # try others. 21: trading days per month avg
    change = data.history(context.IWM, 'price', wndw, '1d').pct_change().mean()

    #candidates = context.output.query('sma_1 > sma_10').index.tolist()
    
    # Try all sorts of pipeline columns here instead 
    candidates = context.output.query('sma_1 / sma_10 < sma_7')  #.index.tolist()
    
    # In case anyone's interested in current weights
    for s in context.portfolio.positions:
        candidates['now'] = (data.current(s, 'price') * context.portfolio.positions[s].amount) / context.portfolio.portfolio_value
        
    candidates['new'] = candidates.sma_10 / candidates.sma_1
    candidates.new   /= candidates.new.sum()
    
    oos = get_open_orders()
    
    for s in candidates.index:
        if s in oos: continue
        if not data.can_trade(s): continue
        if change < -.02:
            order_target_percent(s, 0)
            continue
        else:
            order_target_percent(s, candidates.new[s])

def closing(context,data):
    for s in context.portfolio.positions:
        if s not in context.output.index:
            order_target(s, 0)
    
def log_data(context, z, num, fields=None):
    ''' Log info about pipeline output or, z can be any DataFrame or Series
    https://quantopian.com/posts/overview-of-pipeline-content-easy-to-add-to-your-backtest
    '''
    if not len(z):
        log.info('Empty pipe')
        return

    try: context.log_data_done
    except:
        log.info('starting_cash ${:,}   portfolio ${:,}     {} positions ...'.format(
            int(context.portfolio.cash),
            int(context.portfolio.portfolio_value),
            len(context.portfolio.positions)
        ))
        context.log_data_done = 1

    # Options
    log_nan_only = 0          # Only log if nans are present.
    show_sectors = 0          # If sectors, see them or not.
    show_sorted_details = 1   # [num] high & low securities sorted, each column.
    padmax = 6                # num characters for each field, starting point.

    # Change index to just symbols for readability, meanwhile, right-aligned
    z = z.rename(index=dict(zip(z.index.tolist(), [i.symbol.rjust(6) for i in z.index.tolist()])))

    # Series ......
    if 'Series' in str(type(z)):    # is Series, not DataFrame
        nan_count = len(z[z != z])
        nan_count = 'NaNs {}/{}'.format(nan_count, len(z)) if nan_count else ''
        if (log_nan_only and nan_count) or not log_nan_only:
            pad = max( padmax, len('%.5f' % z.max()) )
            log.info('{}{}{}   Series  len {}'.format('min'.rjust(pad+5),
                'mean'.rjust(pad+5), 'max'.rjust(pad+5), len(z)))
            log.info('{}{}{} {}'.format(
                ('%.5f' % z.round(6). min()).rjust(pad+5),
                ('%.5f' % z.round(6).mean()).rjust(pad+5),
                ('%.5f' % z.round(6). max()).rjust(pad+5),
                nan_count
            ))
            log.info('High\n{}'.format(z.sort_values(ascending=False).head(num)))
            log.info('Low\n{}' .format(z.sort_values(ascending=False).tail(num)))
        return

    # DataFrame ......
    content_min_max = [ ['','min','mean','max',''] ] ; content = ''
    for col in z.columns:
        try: z[col].max()
        except: continue   # skip non-numeric
        if col == 'sector' and not show_sectors: continue
        nan_count = len(z[col][z[col] != z[col]])
        nan_count = 'NaNs {}/{}'.format(nan_count, len(z)) if nan_count else ''
        padmax    = max( padmax, len(str(z[col].max())) ) ; mean_ = ''
        if len(str(z[col].max())) > 8 and 'float' in str(z[col].dtype):
            z[col] = z[col].round(6)   # Reduce number of decimal places for floating point values
        if 'float' in str(z[col].dtype): mean_ = str(round(z[col].mean(), 6))
        elif 'int' in str(z[col].dtype): mean_ = str(round(z[col].mean(), 1))
        content_min_max.append([col, str(z[col] .min()), mean_, str(z[col] .max()), nan_count])
    if log_nan_only and nan_count or not log_nan_only:
        content = 'Rows: {}  Columns: {}'.format(z.shape[0], z.shape[1])
        if len(z.columns) == 1: content = 'Rows: {}'.format(z.shape[0])

        paddings = [6 for i in range(4)]
        for lst in content_min_max:    # set max lengths
            i = 0
            for val in lst[:4]:    # value in each sub-list
                paddings[i] = max(paddings[i], len(str(val)))
                i += 1
        headr = content_min_max[0]
        content += ('\n{}{}{}{}{}'.format(
             headr[0] .rjust(paddings[0]),
            (headr[1]).rjust(paddings[1]+5),
            (headr[2]).rjust(paddings[2]+5),
            (headr[3]).rjust(paddings[3]+5),
            ''
        ))
        for lst in content_min_max[1:]:    # populate content using max lengths
            content += ('\n{}{}{}{}     {}'.format(
                lst[0].rjust(paddings[0]),
                lst[1].rjust(paddings[1]+5),
                lst[2].rjust(paddings[2]+5),
                lst[3].rjust(paddings[3]+5),
                lst[4],
            ))
        log.info(content)

    if not show_sorted_details: return
    if len(z.columns) == 1:     return     # skip detail if only 1 column
    details = fields if fields is not None else z.columns
    for detail in details:
        if detail == 'sector' and not show_sectors: continue
        hi = z[details].sort_values(by=detail, ascending=False).head(num)
        lo = z[details].sort_values(by=detail, ascending=False).tail(num)
        content  = ''
        content += ('_ _ _   {}   _ _ _'  .format(detail))
        content += ('\n\t... {} highs\n{}'.format(detail, str(hi)))
        content += ('\n\t... {} lows \n{}'.format(detail, str(lo)))
        if log_nan_only and not len(lo[lo[detail] != lo[detail]]):
            continue  # skip if no nans
        log.info(content)

def track_orders(context, data):
    '''  Show orders when made and filled.
           Info: https://www.quantopian.com/posts/track-orders
    '''
    c = context
    try: c.trac
    except:
        c.t_opts = {        # __________    O P T I O N S    __________
            'symbols'     : [],   # List of symbols to filter for, like ['TSLA', 'SPY']
            'log_neg_cash': 1,    # Show cash only when negative.
            'log_cash'    : 1,    # Show cash values in logging window or not.
            'log_ids'     : 1,    # Include order id's in logging window or not.
            'log_unfilled': 1,    # When orders are unfilled. (stop & limit excluded).
            'log_cancels' : 0,    # When orders are canceled.
        }    # Move these to initialize() for better efficiency.
        c.trac = {}
        c.t_dates  = {  # To not overwhelm the log window, start/stop dates can be entered.
            'active': 0,
            'start' : [],   # Start dates, option like ['2007-05-07', '2010-04-26']
            'stop'  : []    # Stop  dates, option like ['2008-02-13', '2010-11-15']
        }
        log.info('track_orders active. Headers ...')
        log.info('             Shares     Shares')
        log.info('Min   Action Order  Sym  Now   at Price   PnL   Stop or Limit   Cash  Id')
    #from pytz import timezone as _tz  # Python only does once, makes this portable.
                                      #   Move to top of algo for better efficiency.
    # If 'start' or 'stop' lists have something in them, triggers ...
    if c.t_dates['start'] or c.t_dates['stop']:
        _date = str(get_datetime().date())
        if   _date in c.t_dates['start']:    # See if there's a match to start
            c.t_dates['active'] = 1
        elif _date in c.t_dates['stop']:     #   ... or to stop
            c.t_dates['active'] = 0
    else: c.t_dates['active'] = 1           # Set to active b/c no conditions.
    if c.t_dates['active'] == 0: return     # Skip if not active.
    def _minute():   # To preface each line with the minute of the day.
        bar_dt = get_datetime().astimezone(_tz('US/Eastern'))
        return (bar_dt.hour * 60) + bar_dt.minute - 570 # (-570 = 9:31a)
    def _trac(to_log):      # So all logging comes from the same line number,
        log.info(' {}   {}'.format(str(_minute()).rjust(3), to_log))  # for vertical alignment in the logging window.

    for oid in c.trac.copy():               # Existing known orders
      o = get_order(oid)
      if c.t_opts['symbols'] and (o.sid.symbol not in c.t_opts['symbols']): continue
      if o.dt == o.created: continue        # No chance of fill yet.
      cash = ''
      prc  = data.current(o.sid, 'price') if data.can_trade(o.sid) else c.portfolio.positions[o.sid].last_sale_price
      if (c.t_opts['log_neg_cash'] and c.portfolio.cash < 0) or c.t_opts['log_cash']:
        cash = str(int(c.portfolio.cash))
      if o.status == 2:                     # Canceled
        do = 'Buy' if o.amount > 0 else 'Sell' ; style = ''
        if o.stop:
          style = ' stop {}'.format(o.stop)
          if o.limit: style = ' stop {} limit {}'.format(o.stop, o.limit)
        elif o.limit: style = ' limit {}'.format(o.limit)
        if c.t_opts['log_cancels']:
          _trac('  Canceled {} {} {}{} at {}   {}  {}'.format(do, o.amount,
             o.sid.symbol, style, prc, cash, o.id[-4:] if c.t_opts['log_ids'] else ''))
        del c.trac[o.id]
      elif o.filled:                        # Filled at least some.
        filled = '{}'.format(o.amount)
        filled_amt = 0
        if o.status == 1:                   # Complete
          if 0 < c.trac[o.id] < o.amount:
            filled   = 'all {}/{}'.format(o.filled - c.trac[o.id], o.amount)
          filled_amt = o.filled
        else:                                    # c.trac[o.id] value is previously filled total
          filled_amt = o.filled - c.trac[o.id]   # filled this time, can be 0
          c.trac[o.id] = o.filled                # save fill value for increments math
          filled = '{}/{}'.format(filled_amt, o.amount)
        if filled_amt:
          now = ' ({})'.format(c.portfolio.positions[o.sid].amount) if c.portfolio.positions[o.sid].amount else ' _'
          pnl = ''  # for the trade only
          amt = c.portfolio.positions[o.sid].amount ; style = ''
          if (amt - o.filled) * o.filled < 0:    # Profit-taking scenario including short-buyback
            cb = c.portfolio.positions[o.sid].cost_basis
            if cb:
              pnl  = -filled_amt * (prc - cb)
              sign = '+' if pnl > 0 else '-'
              pnl  = '  ({}{})'.format(sign, '%.0f' % abs(pnl))
          if o.stop:
            style = ' stop {}'.format(o.stop)
            if o.limit: style = ' stop {} limit {}'.format(o.stop, o.limit)
          elif o.limit: style = ' limit {}'.format(o.limit)
          if o.filled == o.amount: del c.trac[o.id]
          _trac('   {} {} {}{} at {}{}{}'.format(
            'Bot' if o.amount > 0 else 'Sold', filled, o.sid.symbol, now,
            '%.2f' % prc, pnl, style).ljust(52) + '  {}  {}'.format(cash, o.id[-4:] if c.t_opts['log_ids'] else ''))
      elif c.t_opts['log_unfilled'] and not (o.stop or o.limit):
        _trac('      {} {}{} unfilled  {}'.format(o.sid.symbol, o.amount,
         ' limit' if o.limit else '', o.id[-4:] if c.t_opts['log_ids'] else ''))

    oo = get_open_orders().values()
    if not oo: return                       # Handle new orders
    cash = ''
    if (c.t_opts['log_neg_cash'] and c.portfolio.cash < 0) or c.t_opts['log_cash']:
      cash = str(int(c.portfolio.cash))
    for oo_list in oo:
      for o in oo_list:
        if c.t_opts['symbols'] and (o.sid.symbol not in c.t_opts['symbols']): continue
        if o.id in c.trac: continue         # Only new orders beyond this point
        prc = data.current(o.sid, 'price') if data.can_trade(o.sid) else c.portfolio.positions[o.sid].last_sale_price
        c.trac[o.id] = 0 ; style = ''
        now  = ' ({})'.format(c.portfolio.positions[o.sid].amount) if c.portfolio.positions[o.sid].amount else ' _'
        if o.stop:
          style = ' stop {}'.format(o.stop)
          if o.limit: style = ' stop {} limit {}'.format(o.stop, o.limit)
        elif o.limit: style = ' limit {}'.format(o.limit)
        _trac('{} {} {}{} at {}{}'.format('Buy' if o.amount > 0 else 'Sell',
          o.amount, o.sid.symbol, now, '%.2f' % prc, style).ljust(52) + '  {}  {}'.format(cash, o.id[-4:] if c.t_opts['log_ids'] else ''))
        
There was a runtime error.

Hi James,

Thought I might take a moment to explain everything I fixed about your algorithm:

Imports

You imported more things than you actually used:

import pandas as pd
import numpy as np
import math
from quantopian.pipeline.factors import CustomFactor
import quantopian.algorithm as algo
import quantopian.optimize as opt
from quantopian.pipeline.data.psychsignal import stocktwits
from quantopian.pipeline.experimental import risk_loading_pipeline
import talib

All of these imports are redundant.

Initialisation

The return statement in initialize() made everything else below it unreachable. Therefore:

context.max_leverage = 1.0
context.max_pos_size = 0.015
context.max_turnover = 0.95

algo.schedule_function(
rebalance,
algo.date_rules.week_start(),
algo.time_rules.market_open())

None of the above was ever called or activated.

Pipeline

There were a number of redundancies inside the pipeline, namely all the pass statements and:

pipe = Pipeline()

The code operates fine without any of these.

Rebalance

Nothing in here was ever called, but there were still a few logic errors that I thought I should point out.

alpha = context.pipeline_data.sentiment_score

context.pipeline_data is never defined, and even if it were, this pipeline has no sentiment_score column, so this line throws an error.
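
For reference, the usual Quantopian template pattern looks roughly like this (a sketch only; the column would have to be one this pipeline actually produces, e.g. year_return -- there is no sentiment column here):

def before_trading_start(context, data):
    # store the pipeline output so rebalance() has something to read
    context.pipeline_data = pipeline_output('my_pipeline')

def rebalance(context, data):
    # use a column that actually exists in this pipeline
    alpha = context.pipeline_data['year_return']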

MA Crossover Handling

Weighting

The algorithm does not handle weighting correctly.

WEIGHT = 1.0 / len('my_pipeline')

This simply divides 1.0 by the length of the string 'my_pipeline', which is 11, rather than by the number of stocks selected.
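
Spelled out (the second line is the form the other versions in this thread use):

WEIGHT = 1.0 / len('my_pipeline')     # len() of the string itself = 11, so WEIGHT is always ~0.0909
WEIGHT = 1.0 / len(context.output)    # intended: divide by the number of stocks in the pipeline output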

History

sma_5_IWM = hist[-5:].mean()
sma_1_IWM = hist[-1:].mean()
sma_10_IWM = hist[-10:].mean()

These values are never used, so they can be removed.

PCT comparison

if 'pct' < -.02:

This is not comparing the percentage change of IWM to -0.02. What it is actually doing is comparing the string 'pct' to the number -0.02. In Python 2, comparing two objects of different non-numeric types falls back to comparing the type names, and numbers always sort below everything else, so any int or float is less than any string. The condition is therefore always False and the exit order never fires.
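
Concretely (Quantopian ran Python 2, so the bad comparison evaluates silently instead of raising a TypeError as Python 3 would):

'pct' < -.02     # always False in Python 2: numbers compare as smaller than any string
change < -.02    # intended: compare the IWM percentage change computed a few lines earlier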

Closing positions

At no point does your algorithm iterate through your portfolio and close all stocks which are not in the pipeline output. As a result, once a stock enters your portfolio, it never leaves.
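
A minimal version of the missing exit logic, along the lines of what the corrected versions in this thread do:

# close anything still held that no longer appears in the pipeline output
for stock in context.portfolio.positions:
    if stock not in context.output.index and data.can_trade(stock):
        order_target_percent(stock, 0)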

Summary

Once all these logic errors are fixed, you end up with a relatively good algorithm. The versions you have shared will be of no interest to any hedge fund, though, primarily because of the massive leverage you're deploying, which, as @Viridian Hawk said, makes the returns meaningless because you wipe out before you get any of the upside.

I also want to reiterate James Villa's comment about comparing returns to SMA of closing price. It's like comparing apples to oranges. Take AAPL for example -- just roughly by glancing at the chart, the 200-day SMA will be around $180, and the 100-day returns will be roughly -0.25 (negative 25%). Comparing these two numbers ($180 vs -25%) is meaningless. So likely that was a mistake. Looking closer at the code, it looks like that bit of logic doesn't even factor into the algorithm when it comes down to it. It gets ignored. Was that intentional?

Also, be aware of framing bias. Your recent ("under 10 leverage") backtest looks impressive, but that's because you've framed it to run only during its best period. It goes bankrupt right before your backtest dates, and goes bankrupt again right after your backtest dates. I don't know if this was deliberate or if you are innocently fooling yourself. However, this will not fool professionals.

I don't mean to be discouraging. Keep at it and you may find something. This, however, isn't it. It takes hard work and knowing what you're doing. I recommend taking Jamie's advice and removing all the code in there that isn't doing anything, since it is just confusing. In addition, you need to become fully aware of potential biases (framing, p-hacking/curve-fitting, lookahead, etc.) and shortcomings/inaccuracies of the backtesting framework in order to produce algorithms that avoid these pitfalls. If a strategy works because of a bug (such as WEIGHT = 1.0 / len('my_pipeline') or 'pct' < -.02), then likely you've just gotten lucky instead of finding a real market inefficiency. At the very least, you'll need to understand those bugs or else you'll misunderstand completely what your algorithm is doing. Good luck!

@viridian hawk
Are there other potential buyers that you're aware of?

@ Viridian Hawk, what do you mean by " It goes bankrupt right before your backtest dates, and goes bankrupt again right after your backtest dates"? And do you have any guidance on how I can change my algorithm in that way?

@James Gastineau - if you start your backtest a year earlier and run it until today, look what happens. By bankrupt I mean getting wiped out AKA hitting -100% returns. Even though the returns chart will show a recovery after getting below -100%, in reality this would not be possible. As for guidance, I'd start by looking at what Jamie and Blue posted.

[Attached backtest: Clone Algorithm (2 clones). Performance metrics did not load.]
from zipline.api import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.factors import SimpleMovingAverage
import quantopian.algorithm as algo
import quantopian.optimize as opt

def initialize(context):
    my_pipe = make_pipeline()
    attach_pipeline(my_pipe, 'my_pipeline')   
    schedule_function(ma_crossover_handling, 
                      date_rules.every_day(), 
                      time_rules.market_open(hours=1))
    set_benchmark(sid(21519))
    
    return my_pipe


def make_pipeline():
    universe = QTradableStocksUS()
    
    pipe = Pipeline()

    sma_10 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=10)
    sma_1 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=1)
    
    peg_ratio = Fundamentals.peg_ratio.latest
    peg_ratio_1 = peg_ratio <= 1
    
    return Pipeline(
        columns={
            'sma_1': sma_1,
            'sma_10':sma_10,            
        }, screen=peg_ratio_1 & universe
    )


def before_trading_start(context, data):
    # Store our pipeline output DataFrame in context
    context.output = pipeline_output('my_pipeline').dropna()
    
    record(plen = len(context.output.index))
    record(l = context.account.leverage)


def ma_crossover_handling(context,data):
    WEIGHT = 1.0 / 11
            
    open_rules = 'sma_1 > sma_10'
    open_these = context.output.query(open_rules).index.tolist()
   
    for stock in open_these:
        if stock not in context.portfolio.positions and data.can_trade(stock):
            order_target_percent(stock, WEIGHT)
There was a runtime error.

Here is your long-only short-term momentum strategy for value stocks properly coded without the bugs.

(Except, as Jamie pointed out, you should also remove return my_pipe from initialize and remove pipe = Pipeline() from make_pipeline, since those lines of code don't do anything.)
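
For reference, initialize() without the dead code would look roughly like this (make_pipeline() likewise just drops the unused pipe = Pipeline() line):

def initialize(context):
    attach_pipeline(make_pipeline(), 'my_pipeline')
    schedule_function(ma_crossover_handling,
                      date_rules.every_day(),
                      time_rules.market_open(hours=1))
    set_benchmark(sid(21519))
    # no "return my_pipe" -- nothing ever uses initialize's return value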

[Attached backtest: Clone Algorithm (2 clones). Performance metrics did not load.]
from zipline.api import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.factors import SimpleMovingAverage
import quantopian.algorithm as algo
import quantopian.optimize as opt

def initialize(context):
    my_pipe = make_pipeline()
    attach_pipeline(my_pipe, 'my_pipeline')   
    schedule_function(ma_crossover_handling, 
                      date_rules.every_day(), 
                      time_rules.market_open(hours=1))
    set_benchmark(sid(21519))
    
    return my_pipe


def make_pipeline():
    universe = QTradableStocksUS()
    
    pipe = Pipeline()

    sma_10 = SimpleMovingAverage(
        inputs=[USEquityPricing.close],
                window_length=10)
   
    peg_ratio = Fundamentals.peg_ratio.latest
    peg_ratio_filter = peg_ratio <= 1
    
    sma_filter = USEquityPricing.close > sma_10
    
    return Pipeline(
        columns={         
        }, screen=universe & peg_ratio_filter & sma_filter
    )


def before_trading_start(context, data):
    # Store our pipeline output DataFrame in context
    context.output = pipeline_output('my_pipeline').dropna()
    
    record(plen = len(context.output.index))
    record(l = context.account.leverage)


def ma_crossover_handling(context,data):
    weights = {}
    for stock in context.output.index:
        weights[stock] = 1.0 / len(context.output.index)
    
    
    order_optimal_portfolio(
        objective=opt.TargetWeights(weights),  
        constraints=[
            opt.MaxGrossExposure(1.0),
            opt.NetExposure(1.0,1.0),
        ],  
    )
There was a runtime error.

Sigh. Traps for the novice eh?

A few replies here got deleted, but I figure I'll add something to put a positive spin on things...

In science every experiment is a success. The success here was in discovering that holding a basket of value stocks that are above their 10-day moving average is a really lousy, money-losing strategy. Two things are working against this strategy -- the underperformance of value during the recent era and volatility being stronger than momentum at this frequency (AKA getting "whipsawed"). So we can take what we learned from this experiment and use it to inform the next strategy we test.

Another lesson is that one should always be skeptical about backtest results. It's important to know the limitations of your tools -- know how to read them and know when they're lying to you. On the stock market you're up against some of the smartest, most greedy, and most unscrupulous people in the world. You wouldn't expect a smart, greedy, unscrupulous person to sell you a car today that we know will be worth twice as much tomorrow. Same goes for stocks.

And finally, it's important that we be skeptical about ourselves and the biases we introduce into our algorithms. There are a myriad of things we can do to improve our backtest results without making the strategy more predictive, such as running the backtest over a favorable time period or curve-fitting. It's important to avoid all such inclinations.

Only by giving our results an impartial assessment can we learn, grow, iterate, and improve upon them.

I was recently talking to a friend on the Trading Blox Forum who has backtested and traded for 25 years. He has totally lost confidence in backtesting. I too do not believe that back tests have any validity whatsoever in prediction. I have been trading since the 1990s but only began backtesting early in the 2000s. I made many beginners mistakes as we all do. But I also believe back testing to be largely a waste of time except in the most general terms.

Any backtesting without considering the bear market in 2008 cannot be trusted. :-/

Further to Thomas' comment I would recommend including 2015 as well to check alpha as US equities ended flat that year.

@Savio,
you are right. When I do backtesting, I will choose from 2005 to now. I find the fourth quarter of 2018 is also "interesting".