Hi, I have built a simple script to test this issue. I calculate the SMA14 and SMA21 and buy whenever SMA14 drops below SMA21. That spot is marked as a crossunder, and a buy isn't considered again until SMA14 rises back above SMA21. Once it crosses under again, that's another buy.
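
For reference, the crossunder logic described above can be sketched like this (a simplified stand-alone version using pandas, not the Quantopian code itself; `prices` stands for a minutely price series):

```python
import pandas as pd

def crossunder_signals(prices: pd.Series) -> pd.Series:
    """Return a boolean Series that is True on bars where SMA14
    crosses under SMA21 (i.e. the first bar of each dip below)."""
    sma14 = prices.rolling(14).mean()
    sma21 = prices.rolling(21).mean()
    below = sma14 < sma21
    # a crossunder is a bar that is below while the previous bar was not
    return below & ~below.shift(1, fill_value=False)
```

Because the signal only re-arms once SMA14 is back above SMA21, each dip produces exactly one buy signal.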
Comparing this to other backtesting systems (TradingView, QuantConnect), I end up with different results.

On Quantopian, using minute bars, we receive the 10:29 bar at 10:30, meaning the last SMA14 value I have belongs to 10:29. That's fair on its own. By the time I finish evaluating sma14 < sma21 on the 10:29 values and place an order for SPY, the "real" time is 10:30.

That order, at the earliest, is filled at the close of the next candle, and later still if it's a limit order, even when the limit price is above the "low" of that next candle. So the calculation is done at 10:30 on 10:29 prices, the order is placed at 10:30, and it executes at the 10:31 or 10:32 close at the earliest. In this example the trade happens at 10:34, a full four minutes late, even though the price I offered is above the lows of the candles before it finally fills.

Now, I understand the logic behind having the calculations a minute late: we need the candle to finish to know its final value. That's not the issue. But once I do have the value and the conditions are met (at 10:30), I don't see why I should have to wait multiple full bars for the order to fill. SPY orders fill practically instantly in the real world. By the time a minute or more has passed, that's a huge swing opportunity lost for algorithms that scalp 5-15 minute swings.

Realistically I can place an order at the close price of the current candle and have it filled within a few seconds, on the next candle, pretty much every time. I'd like to reflect this in the backtest.
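
One way to model "fills within seconds on the next bar" in a backtest would be a custom slippage model. This is a sketch only: on Quantopian/zipline the class would subclass `slippage.SlippageModel`, whose `process_order(self, data, order)` returns a `(fill_price, fill_amount)` pair; it is shown standalone here so the fill logic is clear. It deliberately fills the whole order at the next bar's price with no spread or volume cap, so fills are optimistic by design.

```python
class FillNextBarSlippage:  # on Quantopian: class FillNextBarSlippage(slippage.SlippageModel)
    """Fill any open order in full on the very next bar, at that
    bar's price, with no spread and no volume limit (optimistic)."""

    def process_order(self, data, order):
        # the bar being processed is the one right after the bar
        # whose close triggered the signal
        price = data.current(order.asset, 'price')
        # fill the entire remaining amount at that price
        return (price, order.amount)
```

You would then call `set_slippage(FillNextBarSlippage())` in `initialize`. This removes the multi-bar limit-order delay but keeps the one-bar signal delay, which matches how fast a retail system placing market orders within seconds would realistically behave.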

I have compared this against live test data for another algorithm: the real results were in the green while the Quantopian equivalent ended up in the red. Setting the limit order at cheaper prices helps a little, but even then the timing issue remains.

import quantopian.algorithm as algo
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from pytz import timezone
import talib
import math

minutesPassed = 0
inPosition = False
inPositionSince = 0
shareCount = 0
crossUnder = False

def initialize(context):
    algo.schedule_function(
        rebalance,
        algo.date_rules.every_day(),
        algo.time_rules.market_open(hours=1),
    )
    algo.schedule_function(
        record_vars,
        algo.date_rules.every_day(),
        algo.time_rules.market_close(),
    )
    algo.attach_pipeline(make_pipeline(), 'pipeline')

    # 8554 = SPY
    context.assets = [sid(8554)]

    # set commissions to 0 to match TradingView
    # (the original call was empty; PerShare with zero cost does this)
    set_commission(commission.PerShare(cost=0.0, min_trade_cost=0.0))

def make_pipeline():
    yesterday_close = USEquityPricing.close.latest
    pipe = Pipeline(
        columns={
            'close': yesterday_close,
        },
        # no screen: the original referenced an undefined base_universe
    )
    return pipe


def before_trading_start(context, data):
    # this block had lost its enclosing def; it resets state daily
    global minutesPassed
    global inPositionSince
    global inPosition

    inPosition = False
    inPositionSince = 0
    minutesPassed = 0
    context.output = algo.pipeline_output('pipeline')

    context.security_list = context.output.index

def rebalance(context, data):
    pass


def record_vars(context, data):
    pass

def handle_data(context, data):
    """
    Called every minute.
    """
    global minutesPassed
    global inPosition
    global inPositionSince
    global shareCount
    global crossUnder

    minutesPassed += 1

    # keep the sell condition above the skip condition to make sure we
    # don't end up with dangling positions at eod
    if inPosition and minutesPassed > (inPositionSince + 5):
        # previously used order(context.assets[0], -1 * shareCount), but that
        # shorts the equity when we have an unfilled order; selling it all is easier
        order_target_percent(context.assets[0], 0)
        shareCount = 0
        inPosition = False

    if minutesPassed < 60 or minutesPassed > 360:
        return

    # ticks9 = data.history(context.assets[0], 'price', 9, '1m')
    ticks14 = data.history(context.assets[0], 'price', 14, '1m')
    ticks21 = data.history(context.assets[0], 'price', 21, '1m')
    ticks60 = data.history(context.assets[0], 'price', 60, '1m')

    ticks60high = data.history(context.assets[0], 'high', 60, '1m')
    ticks60open = data.history(context.assets[0], 'open', 60, '1m')

    # talib expects float64 arrays, so pass the underlying values
    sma14 = talib.MA(ticks60.values, 14)
    sma21 = talib.MA(ticks60.values, 21)

    sma14m = ticks14.mean()
    sma21m = ticks21.mean()

    if (not crossUnder
            and not inPosition
            and sma14[-1] < sma21[-1]):
        crossUnder = True
        inPosition = True
        inPositionSince = minutesPassed
        # sma21m and sma21[-1] should be the same value
        log.info("Minutes Passed: " + str(minutesPassed)
                 + " Time (T-1 for TradingView): " + str(get_datetime('US/Eastern'))
                 + " Current Close: " + str(ticks60[-1])
                 + " SMA 14 and 21: " + str(sma14[-1]) + " " + str(sma21[-1])
                 + " mean: " + str(sma14m) + " " + str(sma21m))

        # I cut-paste this block above the condition when I want to test
        # ordering at the next tick, or keep it here for ordering immediately;
        # either way Quantopian seems to add a delay of its own on top
        cash = context.portfolio.cash
        targetPrice = ticks60open[-1]
        shareCount = cash // targetPrice
        # without the limit, fills tend to happen at prices near or at 'high'
        algo.order(context.assets[0], shareCount,
                   limit_price=data.current(context.assets[0], 'low') + 0.04)

    if crossUnder and sma14[-1] > sma21[-1]:
        crossUnder = False

    # cancel the orders if they don't fill. Has to wait at least 2 minutes,
    # as the orders get "posted" after a minute, so
    # minutesPassed > (inPositionSince + 1) cancels all orders except for the
    # very first order of the day. Why? Beats me.
    if inPosition and minutesPassed > (inPositionSince + 4):
        oo = get_open_orders()             # dict of securities with open orders
        for sec in oo:                     # each security maps to a list
            for order in oo[sec]:          # each order in the list
                cancel_order(order.id)     # cancel by order id
                log.debug("cancelled an order with id " + str(order.id))

I did some more tests in the meantime.

This compares the closing candle against the low price of the next candle. (1)

The first set of tests assumes you place the order at the o/h/l/c values for the current tick. (2)

Assuming you place the order for the open(T) price,  your order fills 0.708 of the time at (T+1)
Assuming you place the order for the high(T) price,  your order fills 0.990 of the time at (T+1)
Assuming you place the order for the low(T) price,   your order fills 0.440 of the time at (T+1)
Assuming you place the order for the close(T) price, your order fills 0.891 of the time at (T+1)
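
The fill-rate numbers above (and the T-1 set below) can be reproduced from minutely OHLC data roughly like this. This is a sketch under my own assumptions: `bars` is a DataFrame with open/high/low/close columns, `delay=0` models placing the limit at a value of the candle that just closed, and `delay=1` models the T-1 case; per footnote (1), a limit exactly equal to the next low counts as unfilled.

```python
import pandas as pd

def limit_fill_rate(bars: pd.DataFrame, field: str, delay: int = 0) -> float:
    """Fraction of bars where a limit order placed at `field` of candle
    T-delay would fill against the low of candle T+1 (strict inequality:
    a limit exactly equal to the next low is counted as unfilled)."""
    limit = bars[field].shift(delay)       # price the order is placed at
    next_low = bars['low'].shift(-1)       # the candle the order fills in
    fills = limit > next_low
    valid = limit.notna() & next_low.notna()
    return fills[valid].mean()
```

For example, `limit_fill_rate(bars, 'close')` gives the first table's close(T) rate and `limit_fill_rate(bars, 'close', delay=1)` the second table's close(T-1) rate.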


As we can see, if we place the order at the high of the previous candle, it fills 99% of the time. Based on the closing price, this drops to 89.1%. The rest of the time we would have to cancel the order after a minute, either paying the cancellation fee between 1% and 15% of the time or extending the order's lifetime, depending on the route we go and how much the few seconds of delay change the situation. In addition, if one backtests placing the order at the high, the order will likely fill below that price in real life, making up for the cancellation fees paid on the rare occasions it doesn't fill at all.

If we operate on the previous candle's values and our orders are considered, at the earliest, at the next candle, the situation changes a lot. This is how I understand Quantopian's orders to work. Please correct me if I am wrong; I hope I am.

Assuming you place the order for the open(T-1)  price, your order fills 0.659 of the time at (T+1)
Assuming you place the order for the high(T-1)  price, your order fills 0.864 of the time at (T+1)
Assuming you place the order for the low(T-1)   price, your order fills 0.462 of the time at (T+1)
Assuming you place the order for the close(T-1) price, your order fills 0.707 of the time at (T+1)


Using the close price now fills only 70.7% of the time, down from 89.1%: a drop of 18.4 percentage points, or about 21% in relative terms. That's a huge difference when we are measuring methods meant to make or lose around a percent or less over a whole day. (3)

More importantly, the price is higher than even the previous high a whopping 14% of the time, while it was higher only 1% of the time using the candle that just finished (assuming we do the calculations as soon as the candle closes). If our delay is measured in seconds and we place orders at the high price, theoretically we will miss only 1% of the time (4), although it will be worse in practice. On the other hand, I don't expect the Quantopian case to get worse, as it's more than possible to beat the 14% miss rate if we keep our delay under 60 seconds, which is a lot of time.

Case in point: I was able to come up with a test that basically tracks SPY and slightly (<5%) outperforms it on QuantConnect. The same test on Quantopian underperforms SPY.
Both use the standard order call, no limit orders, with fees set to 0 on Quantopian and left at default on QuantConnect.

(1) Tested over a year of minutely candles, ignoring the first and last 30 minutes of each day, because the indicators don't reach proper values for at least the first 30 minutes and buying near market close is risky. When the limit price equals the low of the next candle, the order is assumed not to fill (though this is rare).

(2) It's possible to compute the indicator values in under a second unless the method is complicated, and placing an order takes only a few seconds as well; we likely need to account for a data-retrieval delay too. Even so, this is far from trying to time things to under a second, or HFT. I consider keeping the total delay above a second at a minimum but below 10 seconds a feasible task with retail systems, instead of assuming a delay of a whole minute, which I believe is the case here.

(3) Realistically, with a few seconds of delay, we wouldn't fill exactly 89.1% of the time, but we might miss that figure by only a few percentage points, not by 18. I plan to test this in real-life situations or real-time paper trading. In addition, if we also assume a cancellation fee in our model, we either have to assume a higher limit or pay the fee for the roughly 29% of orders that never fill. Both seem weaker models than just assuming a cancellation fee based on placing the order only a few seconds late, if I understood the mechanics of Quantopian order filling correctly. Again, please correct me if this is all wrong; I would be glad to hear that.
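
To make the trade-off concrete, here is a toy expected-cost comparison. The cancellation fee is hypothetical ($0.50 per cancelled order); the miss rates are the complements of the close-price fill rates from the two tables above.

```python
def expected_cancel_cost(miss_rate: float, cancel_fee: float) -> float:
    """Expected cancellation cost per attempted trade."""
    return miss_rate * cancel_fee

# close(T) limit, few-seconds delay: misses ~10.9% of attempts
fast = expected_cancel_cost(1 - 0.891, 0.50)
# close(T-1) limit, full-bar delay: misses ~29.3% of attempts
slow = expected_cancel_cost(1 - 0.707, 0.50)
```

Under these assumptions the full-bar delay nearly triples the expected cancellation cost per attempted trade, before even counting the missed swing itself.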

(4) With a few seconds of delay, the 1% miss rate will probably be higher, but I don't expect it to grow 14-fold, which is what skipping the candle implies. Assuming something like a 5x increase as a worst-case mean seems reasonable, which would put us at around a 5% cancellation rate. I have to test for this as well.