get_open_orders() on a sub-minutely level

Hey all, I was wondering if it is possible to know as soon as an order gets filled. Since the handle_data method only runs minute to minute, I suppose get_open_orders() doesn't do the trick if called from there. Is there any other way around this?

Also, when reading the "Order object" section of the API documentation, I saw that the status property doesn't include a partial-fill state. What happens, then, if an order is partially filled when status is referenced?


https://www.quantopian.com/posts/track-orders is the best so far. The debugger can be helpful for understanding what it reports on a partial fill, compared against 'o', the order object.

Blue - thanks for the response. I have to be honest, since I am new to Python and Quantopian, I have trouble following the syntax. Do you mind if I post some questions here?

The best thing would be some example code: run a full backtest and then attach it here.

OK, I will try to plug this into my strategy. Meanwhile, could you explain line nine real quick?

if 'trac' not in c:

Is this line used to prevent double logging? If so, under what circumstance would it double log? I am not sure why it would, though to be fair I am still trying to wrap my head around the whole thing.

Yeah, the track_orders() code is not super easy to grasp, so I'll see if this helps ...

The condition you mentioned is just a way to make the whole thing portable. That line winds up being run 390 times a day throughout the backtest (or live trading) and is only true the first time; c is the shortcut name for context, and context.trac is the dictionary used to store order ids, as in c.trac[o.id]. An id is added when an order is created; after that minute, the order is checked for fills with o = get_order(oid), and its entry is deleted when the order is complete. The line if 'trac' not in c: could just as well have looked for t_options or t_dates instead of trac to find out whether they had been initialized yet. As mentioned, one can move these to initialize() for better efficiency so the check does not need to run 390 times a day, and the import can also go to the top with any other imports.
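A minimal sketch of that lazy-initialization pattern (the trac name and the c shortcut are from the track_orders post; the bodies here are trimmed to just the point being discussed):

def handle_data(context, data):
    c = context                  # shortcut name for context
    if 'trac' not in c:          # true only on the very first call
        c.trac = {}              # order ids being tracked, keyed by o.id
    # ... fill checking runs here every minute ...

Or, more efficiently, set it up once so the membership test never has to run 390 times a day:

def initialize(context):
    context.trac = {}            # initialized once, no per-minute check needed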

The lack of double logging is a key question. An order can sit unfilled for most of the day, and the only times there will be logging are when the order is first created, when any part of it is filled, and when it is done. So track_orders() can be called many times in a row, even many times in the same minute, and any activity on an order only registers the first time that change happens, because the previous o.filled is stored in c.trac[o.id]['amnt']. The letter 'o' stands for order, and order ids (o.id) are unique since they are keys in a dictionary.
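As a rough sketch of that no-double-logging idea, assuming the c.trac[o.id]['amnt'] structure described above (the function name check_fills and the log format are illustrative, not from the original code):

def check_fills(context):
    c = context
    for oid in list(c.trac):                     # ids of orders being tracked
        o = get_order(oid)                       # refresh the order object
        if o.filled != c.trac[oid]['amnt']:      # fill amount changed since last look
            log.info('{} filled {}/{}'.format(o.sid.symbol, o.filled, o.amount))
            c.trac[oid]['amnt'] = o.filled       # remember it so the same fill is not logged again
        if o.filled == o.amount:                 # order complete
            del c.trac[oid]                      # stop tracking it

Calling check_fills() twice in the same minute logs nothing the second time, since 'amnt' already matches o.filled.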

Part of the question is why, if this is called from handle_data every minute, there is also a need to call it from one's own functions after any ordering.

The answer is that handle_data is processed at the beginning of each minute, while scheduled functions run at the end of every minute. An order placed from a scheduled function such as trade(), for example, goes out at the end of that minute and can be completely filled and gone from the get_open_orders() radar by the time the next minute rolls around, so orders would be missed by this tool if it were called only from handle_data. Calling track_orders() after any orders, from within the functions where the orders are made, lets the second of the two main blocks of code, the one marked 'Handle new orders', become aware of each order created and store its id as c.trac[o.id]. There is a way that might be considered better: scheduling track_orders every minute in a loop, as sketched below.
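That every-minute scheduling alternative would look something like this (a sketch, assuming a track_orders(context, data) signature suitable for schedule_function; 390 is the number of minutes in a regular trading session):

def initialize(context):
    # run track_orders at the end of every minute of the trading day so
    # orders placed by other scheduled functions are never missed
    for i in range(1, 390):
        schedule_function(track_orders, date_rules.every_day(),
                          time_rules.market_open(minutes=i))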

Mess around with this backtest, inject print statements or use the debugger, and you'll eventually trust the code pretty well. Logging output is pasted at the bottom of the 'Source Code' tab for convenience, for those just passing by.

[Attached backtest, ID 593d009f3d3fa069ea446909]

Excuse me if I am wrong. I've been studying the code and still can't figure out how a signal is generated as soon as an order gets filled. The best it can do is produce a signal at the close of the current bar. That seems to be as good as it gets on Quantopian, though. Did I miss anything?

Updated version here.

I clicked to set breakpoints on lines 75 and 112; take a look at my notes below. Replicate this and type variable names at the debugger prompt to examine them, and it should become clearer. Line 112 is minute 1; line 75 is minute 2. Orders were retrieved using their saved order ids on line 54.

[Attached backtest, ID 5945c71f81a2006fdadb7bf4]

I think I finally have a decent grasp of the code now. This is amazing work, and I can't believe it hasn't been incorporated into Zipline. And just to bring it home: in the post above where you mentioned "So track_orders() can be called many times in a row, even many times in the same minute", did you mean something like pairing track_orders with the "time" library to achieve sub-minute resolution, or did I read that wrong?

You have a personal advantage in understanding track_orders, so congratulations. On that point I was merely assuring that it won't double-log. There is one sub-minute situation where it doesn't work fully: order_optimal_portfolio fills its own orders immediately instead of waiting for the next minute, so track_orders doesn't have a chance to log the new buy and sell orders. The first thing it sometimes sees with order_optimal_portfolio is bought and sold.

My typical best use of track_orders is specific:
1. Enter a start date in the options list to focus on an anomaly, starting a day or two before it.
2. Turn on the option to log order ids.
3. Run the backtest to a few days after that date.
4. Copy the logging output to Notepad++.
5. Click an order id; that simultaneously highlights all other instances of it, making it easy to see what's happening with each order.
6. Slow down (the toughest part) and think.
7. Find a programmatic change that might address the problem, possibly in pipeline screening of volume for example, since issues usually (or perhaps always) stem from partial fills.
8. Run again.
9. Copy the two outputs to CompareIt for side-by-side highlighting of the differences.

Thanks Blue! I posted a link in the original thread so people can find their way here and benefit from the comments. And wow, that's a great way to use track_orders, and it raises another question in my mind that's somewhat related to the premise of this whole thread. If I import second-level data using fetch_csv and run track_orders by the second using the time library after tweaking it, can orders and fills get recorded at second resolution? For live trading on Robinhood, I suppose track_orders could get near real-time feedback from RH about order status. Do you see any problem with this approach? I am really paranoid about flash crashes, and any system I intend to risk money on would have to stand the test of high-sigma events. I literally learned my lesson on my first day of trading, Aug 24, 2015, when SPY tanked 3% within 10 seconds of the market opening.