Optimize API Now Available in Algorithms

A few weeks ago, I posted a notebook presenting a new Optimize API. Today we're announcing that the Optimize API is available for use in algorithms, and we've added new features to make the API easier to use in the context of a running algorithm.

The basic idea of the Optimize API is that it relieves authors of the burden of manually sizing orders and positions. Instead of calculating individual share counts and placing orders directly, authors can now specify the desired state of their portfolio in terms of high-level objectives and constraints.

Since the original announcement, we've made a few tweaks to the built-in Objective and Constraint classes, but the biggest change to the API is the addition of a new top-level entrypoint, order_optimal_portfolio, which is available only in algorithms. order_optimal_portfolio accepts three parameters, all of which are required:

• objective, an Objective that the new portfolio should maximize or minimize.
• constraints, a list of Constraints that the new portfolio should adhere to.
• universe, an iterable of Equity objects to consider in the optimization. In idiomatic usage, this will usually be the index of a Pipeline result.

When called, order_optimal_portfolio calculates the set of portfolio weights that optimizes objective while still respecting constraints. It then subtracts the target portfolio weights from the current portfolio weights and places orders to move from the current portfolio state to the new optimal state.
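The "subtract and order" step amounts to an aligned Series subtraction. A minimal sketch with made-up tickers and weights (not the platform's actual implementation):

```python
import pandas as pd

# Hypothetical target weights from the optimizer and the portfolio's
# current weights; assets missing from either side count as zero.
target = pd.Series({'AAPL': 0.10, 'MSFT': -0.10, 'XOM': 0.05})
current = pd.Series({'AAPL': 0.02, 'XOM': 0.05, 'GE': 0.03})

# Aligned subtraction on the union of assets gives the weight deltas.
deltas = target.sub(current, fill_value=0.0)

# Only nonzero deltas turn into orders.
orders = deltas[deltas.abs() > 1e-9]
print(orders)
```

Assets held but absent from the target (GE here) get a negative delta that closes them out, while XOM's delta is zero and generates no order.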

### Examples

#### Backtest

The backtest attached to this post provides a complete example of how the Optimize API can be used in a realistic trading algorithm. The outline of the algorithm is as follows:

1. Once a month, choose a universe of 500 liquid assets.
2. Every day, build an alpha vector for our 500 assets. The alpha model we use is very simple: we rank assets by z-score of free cash flow yield and earnings yield, both of which are fundamental value measures.
3. Once a week, calculate the portfolio that maximizes the alpha-weighted sum of our position sizes, subject to the following constraints:
• Our portfolio must maintain a gross leverage ratio of 1.0 or less.
• Our portfolio can have no more than 1.5% in any single name.
• Our portfolio must be equally exposed to long and short positions.
• Within each market sector, our portfolio must be equally exposed to long and short positions.
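As a rough illustration, all four constraints can be checked against a candidate weight vector in plain pandas (the tickers, sectors, and weights below are invented for the example):

```python
import pandas as pd

# Hypothetical six-asset candidate portfolio, two names per sector.
weights = pd.Series([0.015, -0.015, 0.010, -0.010, 0.012, -0.012],
                    index=['A', 'B', 'C', 'D', 'E', 'F'])
sectors = pd.Series(['tech', 'tech', 'energy', 'energy', 'health', 'health'],
                    index=weights.index)

gross_leverage = weights.abs().sum()          # must be <= 1.0
max_position = weights.abs().max()            # must be <= 0.015 (1.5%)
net_exposure = weights.sum()                  # must be ~0 (dollar neutral)
sector_net = weights.groupby(sectors).sum()   # each must be ~0

print(gross_leverage, max_position, net_exposure)
print(sector_net)
```

The optimizer searches for the weights that maximize the objective among all vectors passing checks like these, rather than validating a vector after the fact.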

#### Notebook

In the first comment below this post I've included an updated version of my original optimization notebook. It provides a more theoretical introduction to the idea of portfolio optimization, and it includes a reference for the built-in objectives and constraints.

### Next Steps

Now is a great time to try out the Optimize API and provide design feedback or ideas for improvements. The API is still marked as experimental, which means breaking changes are possible, but based on feedback from the previous announcement, I think it's unlikely that there will be major backwards-incompatible changes. Examples of ideas for improvements might include new objectives/constraints, a first-class notion of "penalties", or options for avoiding orders if the new portfolio isn't a significant improvement over the old one.

If you want to get a feel for the new API but don't know where to start, I'd recommend cloning the attached algorithm and tweaking some of the parameters. Many of the constants listed above can be easily changed by editing a line or two. How does increasing the leverage cap affect performance? How about the position size constraint? For a bit more of a challenge, try adding a new constraint. The algorithm currently places a few orders that don't fill by the end of the day. Would the algo improve if we constrained our position sizes to a fixed percentage of trailing volume?

If you're looking to dig deeper into the details of the Optimize API, I'd recommend opening up the notebook and reading through the examples. If you have a specific idea you want to try, you can build your constraints and objectives interactively in research and copy them over to an algorithm when you're happy with them.

# Backtest ID: 584065213e96c7623b8c5b08
Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

84 responses

Here's the updated notebook.


Hi Scott,

Is there any way to get the order IDs for the orders that were submitted?

Also, how are you doing the ordering? Are you just looping through the list of stocks, and applying their weights with order_target_percent or something else? For example:

for i, stock in enumerate(stocks):
    order_target_percent(stock, weight[i])


Scott,

Wow. This introduces a whole new paradigm to the Quantopian platform. Great work.

I have the same questions as Grant. It would be nice to simply return the weights or the delta weights and not do the ordering too (though both have their uses). One issue I see with doing the ordering is closing/reducing positions may not occur before opening/increasing others. The intra-day leverage may get high. While not an issue in backtesting, in live trading those orders may be cancelled.

Hi Grant/Dan,

Good questions!

Is there any way to get the order IDs for the orders that were submitted?

order_optimal_portfolio doesn't have a return value right now, but that's probably an oversight. I think it would be sensible to make it return a Series mapping assets to order ids.

Also, how are you doing the ordering? Are you just looping through the list of stocks, and applying their weights with order_target_percent?

Pretty much. I have ideas for how this could be made smarter, but I wanted to start with the simplest possible implementation. Things I'm thinking about here:

• Adding some notion of a "minimum ordering threshold" so that an algo doesn't place orders if it won't materially change the portfolio. This could probably be specified either in terms of the optimization objective or in terms of the change in the portfolio. This might be unnecessary if we make it easier to penalize turnover in the optimization objective.
• Allowing more control over how the target weights are quantized back to share counts. order_target_percent just finds the share count such that the new position size is closest to target_percent * portfolio_value, but it's possible to get unlucky and have lots of orders round in the same direction, which could result in being over- or underexposed. I don't have a great sense yet of how likely this is to be a real issue (I'd love to see community research on this!), but there are fancier quantization algorithms we could use if it is.
• Allowing more control over the time scale on which orders are executed. I could imagine breaking up the delta into smaller orders and executing them throughout some specified period, for example. This might help with the issue Dan noted about intraday leverage if some orders fill faster than others. This is probably a longer-term idea, since it would likely require changes to how we model execution in the backtester.
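To illustrate the quantization point above, here's a toy measurement of the rounding drift, assuming order_target_percent-style nearest-share rounding (the prices and portfolio value are made up):

```python
# Quantize small target weights to whole shares and measure how far the
# achieved exposure drifts when several names round in the same direction.
portfolio_value = 100000.0
prices = [317.0, 523.0, 411.0, 689.0]   # hypothetical share prices
targets = [0.0025] * 4                  # 25 bps target per name

achieved = 0.0
for price, target in zip(prices, targets):
    shares = round(target * portfolio_value / price)  # nearest whole share
    achieved += shares * price / portfolio_value

drift = achieved - sum(targets)
print(achieved, drift)
```

With small per-name targets, two of the four names round to zero shares here, leaving the book noticeably underexposed relative to the optimizer's intent.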

It would be nice to simply return the weights or the delta weights and not do the ordering too

FWIW, you can still call calculate_optimal_portfolio directly for this, though you'd have to manually build the portfolio weights Series, which is a little tricky. One idea here might be to make the third argument to calculate_optimal_portfolio optional in the backtester, though I have mixed feelings about adding functions that behave differently in research vs backtesting.

Thanks Scott -

Some feedback:

• Maybe I'm confused, but there seems to be a kind of convolution of two steps on the workflow diagram. If I'm following, you've encapsulated the Portfolio Optimization, and Execution steps into order_optimal_portfolio (unless you are thinking of execution as another step, which would include an order/execution management system (O/EMS) at Quantopian and whatever goes on at the broker, once the orders are submitted). As I've gathered, the initial release of the workflow (supposing you are thinking of it as a complete system) will be geared toward daily and longer trading cycles using pipeline. So, it seems like I'd want to run the optimization in before_trading_start and then decide how to apply my portfolio update over the trading day (or multiple days), either using your order_optimal_portfolio or handing off the weights to the O/EMS. There is an optimal way of executing the portfolio update versus time (which is probably minute-by-minute), which is distinct from the daily (at most) optimal portfolio update (assuming only pipeline is used).
• Above, you say "you can still call calculate_optimal_portfolio directly for this, though you'd have to manually build the portfolio weights Series, which is a little tricky." I'm confused by this statement. Isn't this the whole point? If I can't use the optimization API to get out a dictionary of weights, keyed by the stocks in my current universe, then something is missing. What is tricky?
• Generally, I would think through how the API would be used in before_trading_start where one has access to a 5-minute compute window, versus the 50-second one if the API is used within the trading day. For example, what if you re-wrote your algo above to run do_portfolio_construction in before_trading_start, and then ran another optimization every minute (essentially an O/EMS) to manage executing the portfolio update over the trading day, starting at the open?
• As I mentioned on https://www.quantopian.com/posts/request-for-feedback-portfolio-optimization-api , if you look at the OLMAR paper as an example, there is a use case of computing the optimum portfolio over an expanding look-back window, and then combining the set of portfolios into one update (see Section 4.3). Along the same lines, there is discussion in Carver's book on optimization and the need for smoothing. Doing a single run of the optimization may not be the best practice, so if you can supply the weights as an output then it will allow for iterative computations of the optimization, with combination of the results.
• You say "Every day, build an alpha vector for our 500 assets" but then only run the optimization once per week. Why do a daily build of the alpha vector, if you are only going to use it once per week? Couldn't you speed up the backtesting by limiting all computations to once per week? And for that matter, if the idea is to execute the workflow using pipeline, then why not bring back the daily mode for backtesting, since I don't see the advantage of suffering through minutely backtests just to get a feel for performance versus time. Or am I missing something? As I describe above, one could compute the optimal portfolio vector in before_trading_start and then pass it on to an API that would approximate minutely trading.
• From your example above, it is not clear how one would use the optimization API with data from sources other than pipeline (and possibly in combination with pipeline data). What are the specifications on pipeline_data and todays_universe such that one could "roll your own" as inputs to the API?

Scott,
Thanks for this work! I am just starting to use it, so will delay the technical questions,
yet would like to ask if this module will be open sourced (e.g. scheduled for inclusion into Zipline),
or will it remain proprietary?
Again, thanks for your work on making optimization accessible!
alan

Hi Scott,

Another use case to consider is the application of a hedging instrument, versus an all-equity long-short portfolio. My hack function for this is provided below. I actually use an optimization routine to find context.a, which is a long-short vector of weights. If the vector doesn't sum to zero, then I add in an appropriate ETF (either long or short), under the assumption that it will be a perfect hedge.

The workflow doesn't explicitly discuss the use of hedging instruments. I guess it would come into play through the risk model and the portfolio construction steps?

def allocate(context, data):
    try:
        desired_port = context.a
    except AttributeError:
        # the optimizer hasn't produced a weight vector yet
        return
    stocks = context.stocks + [sid(28350)]
    # close out anything no longer in the universe (or the hedge)
    for stock in context.portfolio.positions.keys():
        if stock not in stocks:
            order_target_percent(stock, 0)
    pct_ls = np.sum(desired_port)
    record(pct_ls=pct_ls)
    scale = 1.0 - 0.5 * abs(pct_ls)
    m = len(context.stocks) + 1

    weight = np.zeros(m)
    for i, stock in enumerate(context.stocks):
        weight[i] = scale * context.leverage * desired_port[i]
    # offset any net long/short exposure with the hedge ETF
    weight[-1] = -0.5 * context.leverage * pct_ls
    for i, stock in enumerate(stocks):
        order_target_percent(stock, weight[i])


Scott,
First, I agree that having order_optimal_portfolio return an asset-->order_id map would be good!

Second, I can't find a way to use order_optimal_portfolio and constrain it to an uneven split of the gross value of the longs and shorts.
The DollarNeutral() constraint only allows a 50-50 split (+/- some tolerance) of value, whereas I want to try a 30-70 split (e.g., with $10M, I'd want to put $3M into shorts and $7M into longs).

Any help appreciated!
alan

Hi All,

I'm working on replies to a bunch of the questions here. Will post an update later tonight.

- Scott

Maybe I'm confused, but there seems to be a kind of convolution of two steps on the workflow diagram. If I'm following, you've encapsulated the Portfolio Optimization and Execution steps into order_optimal_portfolio (unless you are thinking of execution as another step, which would include an order/execution management system (O/EMS) at Quantopian and whatever goes on at the broker, once the orders are submitted)

The sentence in parentheses is more or less correct. I see the Optimize API as falling squarely in the realm of portfolio optimization. Execution is a separate problem, downstream from the optimization process, that may eventually merit additional changes to how ordering is done on Quantopian (I could imagine, for example, allowing algos to specify a time period over which an order or a rebalance should be executed).

re: calling calculate_optimal_portfolio directly, the "tricky" piece I was referring to is building the Series of portfolio weights. A function for doing this from an algo would be something like this:

import pandas as pd

def get_current_portfolio_weights(context, data, universe):
    positions = context.portfolio.positions
    positions_index = pd.Index(positions)
    share_counts = pd.Series(
        index=positions_index,
        data=[positions[asset].amount for asset in positions],
    )
    current_prices = data.current(positions_index, 'price')
    current_weights = share_counts * current_prices / context.portfolio.portfolio_value
    return current_weights.reindex(positions_index.union(universe), fill_value=0.0)

We should probably make something like the above available as a new API method to make it easier for users who still want manual control over what happens after an optimization runs.

From your example above, it is not clear how one would use the optimization API with data from sources other than pipeline (and possibly in combination with pipeline data). What are the specifications on pipeline_data and todays_universe such that one could "roll your own" as inputs to the API?

pipeline_data is just a regular pandas DataFrame whose index contains Equity objects. Most Optimize API objects/methods take either pandas Series objects (generally mapping asset -> float) or pandas Index objects (generally containing Equity objects). The specific signatures are all pretty well documented in the notebook posted above.

Second, I can't find a way to use order_optimal_portfolio and constrain it to an uneven split of the gross value of the longs and shorts. The DollarNeutral() constraint only allows a 50-50 split (+/- some tolerance) of value, whereas I want to try a 30-70 split (e.g., with $10M, I'd want to put $3M into shorts and $7M into longs).

There are two ways I could think to do this:

1. If you know what your longs/shorts should be in advance, you could use a NeutralBasket constraint. NeutralBasket takes a set of longs, a set of shorts, and min/max net exposure to the basket. The idea behind the name NeutralBasket is that min_net_exposure and max_net_exposure are expected to be centered around zero, but there's no reason you couldn't center them around some other value to produce a net-long or net-short portfolio. I'm probably going to rename NeutralBasket to something more generic to make it clearer that this is a reasonable usage.

2. If you don't know what your longs/shorts should be in advance, you can use NetLeverage with a target net leverage. In your case, if you want to have a 70/30 split, you'd have a target net leverage of 0.4 * target_gross_leverage.
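The arithmetic behind option 2, as a quick check:

```python
# With gross = long + short and net = long - short, a 70/30 split of
# gross leverage 1.0 corresponds to a net leverage of 0.7 - 0.3 = 0.4,
# i.e. 0.4 * target_gross_leverage, as suggested above.
target_gross = 1.0
target_net = 0.4 * target_gross

long_exposure = (target_gross + target_net) / 2
short_exposure = (target_gross - target_net) / 2
print(long_exposure, short_exposure)
```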

You say "Every day, build an alpha vector for our 500 assets" but then only run the optimization once per week. Why do a daily build of the alpha vector, if you are only going to use it once per week?

We could compute the alpha vector less often by throwing a call to .downsample('week_start') on the final alpha calculation, but doing so makes the pipeline more complex and wouldn't have a material effect on the runtime of the algorithm. If we were doing something fancier in the alpha combination or alpha generation phases (e.g. running a machine learning model), then intelligent downsampling would be worth considering.

Thanks Scott -

Any thoughts on how to do something like this:

1. Pick a sample trailing window of data.
2. For the sample window, get the optimal portfolio vector and the expected return if the portfolio were changed.
3. Store the optimal portfolio vector and its expected return.
4. Loop back to step 1, picking a new sample trailing window.
5. After N samples, use the results in combination to find the overall optimal portfolio vector.
6. Tweak the overall optimal portfolio vector, with an additional set of constraints to minimize unnecessary turnover, trading costs, etc.
7. Submit orders.

Is something like this feasible? If you could kinda point the way, I think I'd be able to work up an example of the long-only OLMAR algo, using your API (I have a long-short version, too, which I could share with you privately).
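The loop in steps 1-5 above can be sketched outside the Optimize API with synthetic returns and a deliberately naive stand-in for the per-window optimization:

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_assets = 120, 4
returns = rng.normal(0.0005, 0.01, size=(n_days, n_assets))  # synthetic daily returns

candidate_weights, expected_returns = [], []
for window in (20, 40, 60):                 # steps 1 and 4: trailing windows
    mu = returns[-window:].mean(axis=0)     # estimate per-asset alpha
    w = np.sign(mu) / n_assets              # toy stand-in for the per-window optimum
    candidate_weights.append(w)             # step 3: store the vector...
    expected_returns.append(float(mu @ w))  # ...and its expected return

# step 5: combine the candidates; here, a simple equal-weight average
combined = np.mean(candidate_weights, axis=0)
print(combined)
```

Step 6's turnover/cost tweak would then operate on combined before any orders are placed; the real per-window step would of course call the optimizer rather than the sign heuristic used here.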

Hey Scott, good work! I hope you can answer my (more mathematical/technical) questions.

1. Which kind of objective-function is feasible (basically: what algorithm is used for the optimization)?
2. What happens if no solution can be found (divergence, maximal iterations, etc)?
3. Are you planing to add optional arguments? Like a gradient-function: My experience is, that an analytical gradient outperforms any numerical scheme by far. Or the type of objective, to choose the best algorithm.

Hi Scott -

I'd posted some comments to https://www.quantopian.com/posts/machine-learning-on-quantopian-part-3-building-an-algorithm , but I guess they weren't germane enough to the ML discussion, and got deleted by a Quantopian moderator. Perhaps you can address them here:

Well, the optimization has a market_neutral constraint. How does it
work? Is it fancy-dancy, doing correlation analyses w.r.t. SPY, and
then projecting beta, and adjusting the weights accordingly? Or is it
something simplistic, like fixing the sum of the signed weights to
zero?

And how do the constraints interact? Will the optimization fail if a
constraint can't be met exactly, or is there some wiggle room?

Also, how would a hedging instrument (or instruments) be applied? It
seems like this is a gap in the framework, no?

Hi Scott,

I'd like to repeat Alan Coppola's question from 3 weeks ago:
Are you planning to include this module into Zipline anytime soon?

chonro

Hi All,

Sorry for not being as responsive as I'd like to be here. A few quick replies:

"What kinds of constraints/objectives are allowed?" - Currently, most of the heavy lifting in the optimize api is done by CVXPY, which is a library for doing convex programming. This page provides a nice overview of the kinds of problems solvable within this framework. The examples and functions pages are also useful here.

"Is the optimize API being open-sourced?" - We'd definitely like to make the code underlying the optimize API open source. The major issue with doing so is that most of the lower-level building blocks of the optimize code are licensed under the GPL, which is incompatible with the Apache License under which we distribute Zipline and the rest of the Quantopian open source portfolio. What that means, as a practical matter, is that we probably can't include the optimize code in Zipline itself; we'd have to distribute it as a standalone library and provide some sort of optional plugin interface for it in Zipline. That's all technically feasible, but splitting up the sources would be an extra source of friction that we don't want to add to the development process right now.

"How do multiple constraints interact? What happens if constraints can't be satisfied?" - All constraints specified in an optimization have to be satisfied. If they can't be, the optimization will fail with an InfeasibleConstraints error. It's also possible for the optimization to fail if the constraints fail to place an upper/lower bound on the objective (imagine, for example, trying to maximize f(x) = x ** 2 with the only constraint being x >= 0), in which case the optimization will fail with an UnboundedObjective error.
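These two failure modes aren't specific to Quantopian's implementation; any convex solver exhibits them. They can be reproduced with SciPy's linprog, used here only as a stand-in for the Optimize API's solver:

```python
from scipy.optimize import linprog

# Infeasible: x <= 0 and x >= 1 cannot both hold.
infeasible = linprog(c=[1.0], A_ub=[[1.0]], b_ub=[0.0], bounds=[(1.0, None)])

# Unbounded: maximize x (minimize -x) with only the constraint x >= 0.
unbounded = linprog(c=[-1.0], bounds=[(0.0, None)])

# linprog reports status 2 for infeasible, 3 for unbounded
print(infeasible.status, unbounded.status)
```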

"How does a market neutral constraint work?" - The built-in DollarNeutral constraint just enforces that the net amount of capital allocated to long and short positions is within some tolerance from zero. You could build a beta-weighted market neutral constraint by calculating your own betas and passing those to a WeightedExposure constraint. This would be a nice example post for the community if anyone is interested in trying it out.
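A sketch of the "calculate your own betas" idea with synthetic data; the regression and the beta-weighted exposure below are illustrative, and the exact WeightedExposure signature should be taken from the notebook's reference:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 250
market = rng.normal(0.0004, 0.01, n_days)          # proxy market returns
true_betas = np.array([0.5, 1.0, 1.5])
noise = rng.normal(0.0, 0.005, (n_days, 3))
asset_rets = market[:, None] * true_betas + noise  # synthetic asset returns

# Estimate each asset's beta by regressing it on the market return.
demeaned = asset_rets - asset_rets.mean(axis=0)
betas_hat = (demeaned * (market - market.mean())[:, None]).mean(axis=0) / market.var()

# Weights chosen so the beta-weighted exposure nets to ~zero; this sum
# is the quantity a WeightedExposure-style constraint would bound.
w = np.array([0.6, 0.15, -0.3])
exposure = float(betas_hat @ w)
print(betas_hat, exposure)
```

Note that this book is beta-neutral but not dollar-neutral (the weights sum to 0.45), which is exactly the distinction between a beta-weighted constraint and DollarNeutral.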

Grant, w/r/t your example algo outline, I won't have time in the near future to write up a full example. I'd suggest posting your outline to the community and see if anyone is interested in trying to collaborate on the problem.

Wondering if there is a possibility of replacing CVXPY, as any algo we write using the optimization API will automatically become GPL, and we would have to open source it if we were to distribute it or even share it privately.

Hi all,

I'm not sure this thread is the best place to ask, but as it deals with optimization I think it's worth asking here...

A walk forward optimization framework could be a nice feature to have.

What is walk forward optimization?
https://en.wikipedia.org/wiki/Walk_forward_optimization
https://www.amibroker.com/guide/h_walkforward.html

Another Quantopian user was looking for such a feature: see https://www.quantopian.com/posts/walk-forward-optimization-is-there-plan-in-the-work

Several years ago, Thomas Wiecki wrote a nice article in a blog post about that
https://blog.quantopian.com/parameter-optimization/

What is the current status?

@ Scott -

Regarding the question of how to handle failures of the optimization routine, how are you planning to approach this? In the case of https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize , there is a success flag along with a message. So, if for some reason the optimization does not converge (but runs) it can be handled. How are you planning to deal with the case of no convergence? There are errors that would cause the algo to crash, such as setting up the optimization improperly; presumably your InfeasibleConstraints and UnboundedObjective conditions fit into this category (and could be managed with try-except). What if the problem is formulated correctly, but a solution just can't be found (within N iterations, a specified time, or a specified tolerance)? Will there be a success flag? Or are you expecting try-except to be used? And if the latter, will the error code be available within the algo?

Grant, w/r/t your example algo outline, I won't have time in the near future to write up a full example. I'd suggest posting your outline to the community and see if anyone is interested in trying to collaborate on the problem.

No problem. I'm not clear yet if we will be able to get out the optimum portfolio weight vector, since it is embedded in your order_optimal_portfolio. Perhaps you or someone else could show me how to use the optimization API at a lower level, to get the portfolio weight vector without placing orders.

@Scott,
Fortunately, looks like the GPL issue for CVXPY won't be an issue much longer. See:
https://github.com/cvxgrp/cvxpy/issues/313

While the functionality is nice, and the package works fairly well,
I personally can't afford using it or looking at it anymore until it is either released open source
or is a commercial product that is fully documented, categorized, and supported.
It just takes too much of my time to infer things about black box code.
alan

OK Great! Hope the switch will be fast!

Hi Scott -

I've started to play around with this new API. I noticed this:

import quantopian.algorithm as algo


What is this? Why do you use it?

Hi Scott,

The attached code fails, when I add the constraint:

MAX_TURNOVER = 0.75

constrain_turnover = opt.MaxTurnover(MAX_TURNOVER)


It fails part-way into the backtest with:

Something went wrong. Sorry for the inconvenience. Try using the built-in debugger to analyze your code. If you would like help, send us an email.
SolverError: Solver 'ECOS' failed. Try another solver.
There was a runtime error on line 130.

Any idea why?

# Backtest ID: 589c42f02bc4a76278035182

Can anyone give an example of how you would use calculate_optimal_portfolio() as opposed to order_optimal_portfolio() in an algorithm?

I can't seem to figure out what it needs me to do to correctly pass it a current_portfolio as a pd.Series

It'd be nice to be able to do something like this:

target_weights = order_optimal_portfolio(
    objective=objective,
    constraints=[
        constrain_gross_leverage,
        constrain_pos_size,
        market_neutral,
        sector_neutral,
        constrain_turnover,
    ],
    universe=todays_universe,
    order=False,
)


It wouldn't actually submit the orders, but instead just spit out the target weights that would be applied via order_target_percent as a dict or some Pandas thingy.

Also, will the API eventually make it to the open-source zipline or otherwise?

Note that eventually, the optimization API may be open-sourced, per Scott's comments and the GitHub issue; it is snagged on a licensing issue.

It looks like the CVXPY licensing issue was resolved yesterday. Yay! I'm looking forward to seeing the code for this...

Wondering if we have a timeline on when this will be open sourced?

I got the turnover constraint to work with a try-except:

try:
    order_optimal_portfolio(
        objective=objective,
        constraints=[
            constrain_gross_leverage,
            constrain_pos_size,
            market_neutral,
            sector_neutral,
            constrain_turnover,
        ],
        universe=todays_universe,
    )
except:
    return


How can I unravel why this is necessary? Do I have some constraints that can sometimes "collide" depending on the input set of alphas?

# Backtest ID: 589d9358b644cf6167b58022

@ Luke

Attached is an algorithm which uses the 'calculate_optimal_portfolio()' method as opposed to 'order_optimal_portfolio()'. I need that for better control of the actual ordering for live trading in Robinhood.

To get the required current actual portfolio weights I did this:

for security, position in context.portfolio.positions.items():
    output_df.set_value(
        security,
        'actual_weight',
        position.amount * position.last_sale_price / context.portfolio.portfolio_value,
    )


The actual weights end up as a column in a dataframe (indexed by the security). These weights are then passed to the 'calculate_optimal_portfolio' method as below

    context.output.adj_weight = opt.calculate_optimal_portfolio(objective, constraints, context.output.actual_weight)



Look towards the end of the algorithm in the function called 'adjust_weights' for the actual optimization code.

# Backtest ID: 589db88ee0be2e62712c5152

@ Grant. I have a similar (perhaps the same) problem with the 'MaxTurnover' constraint. It generates an 'InfeasibleConstraints' error and crashes unless I wrap it in a 'try..except' statement as you did (though I am using the 'calculate_optimal_portfolio' method). It seems to not like the initial portfolio state of zero shares in any position. Maybe it considers going from nothing to anything a violation of the constraint?
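Dan's hypothesis checks out arithmetically, assuming MaxTurnover measures turnover as the sum of absolute weight changes (a plausible but unverified definition):

```python
import numpy as np

# Going from all cash to any gross-1.0 portfolio moves every weight
# from zero, so the total absolute weight change is the full gross
# leverage -- which would violate MaxTurnover(0.75) on day one.
current = np.zeros(4)                          # fresh algo, no positions
target = np.array([0.25, 0.25, -0.25, -0.25])  # gross leverage 1.0

turnover = np.abs(target - current).sum()
print(turnover)
```

If that is indeed the definition, the first rebalance is infeasible by construction, and a special case (or a looser first-day bound) would be needed.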

@ Dan -

Yeah...mysterious. It does work, though, but apparently will cause CVXPY to barf under certain circumstances.

Here's another example of using calculate_optimal_portfolio. Seems to work fine, but it does require construction of the current portfolio weights:

    port = context.stocks + list(set(context.portfolio.positions.keys()) - set(context.stocks))
    w = np.zeros(len(port))
    for i, stock in enumerate(port):
        w[i] = context.portfolio.positions[stock].amount * data.current(stock, 'price')
    denom = np.sum(np.absolute(w))
    if denom > 0:
        w = w / denom
    current_portfolio = pd.Series(w, index=port)


Not so pretty, but it works.
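The same normalization can be packaged as a standalone helper, which makes it testable outside the backtester. This is a hypothetical rewrite of the idea above (plain dicts stand in for Quantopian's position objects and data feed):

```python
import pandas as pd

def current_weights(share_counts, prices, universe):
    """Weights of current holdings, normalized by gross exposure.
    share_counts and prices are plain dicts keyed by symbol; universe
    is the list of symbols the optimizer should consider."""
    # Universe first, then any held symbols that fell out of the universe.
    port = list(universe) + [s for s in share_counts if s not in universe]
    w = pd.Series({s: share_counts.get(s, 0) * prices.get(s, 0.0) for s in port},
                  index=port, dtype=float)
    denom = w.abs().sum()
    return w / denom if denom > 0 else w

# 10 shares of AAPL long, 5 shares of XOM short; MSFT is in the
# universe but not held, so it gets weight 0.
weights = current_weights({'AAPL': 10, 'XOM': -5},
                          {'AAPL': 100.0, 'XOM': 80.0},
                          universe=['AAPL', 'MSFT'])
```

Normalizing by the sum of absolute dollar values means the weights always have gross exposure 1, which matches what the snippet above computes.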

# Backtest ID: 589edcac2415ec615b116e9a

Grant, I've tried your last version posted above. It is a strategy that would comply for what Q is seeking: low beta, low volatility, low drawdown. Since Q has said they might like to leverage something like that, I went for “MAX_GROSS_LEVERAGE = 4.0”. The program did not terminate above that.

With the modification, recorded max leverage goes up to about 1.5.

So, the question: what would force your program to reach a max leverage of 6?

It would give a view of what a 6 times leveraged low drawdown portfolio would look like over the 12-year test.

Based on a preceding version, I had the leverage go up to 4.5 but performance degraded, and this without counting interest charges.

Guy -

Yeah, it does have the appearance of the kind of thing the Q fund team might be interested in--basically a low-return, SR >= 1, low-risk strategy that can be leveraged to 6X (in my mind, not unlike a bank CD or money-market fund, but I guess there the cost of leverage is too high, relative to the return). The optimization API has the effect of getting rid of the nasty drawdowns and volatility that would throw a monkey wrench into the whole idea. The basic idea is that one can keep the leverage pedal to the metal, so long as the risk is managed.

It is weird that you were not able to crank up the leverage. It should just end up being a multiplier: order_target_percent(stock,leverage*weight[stock]).

The optimize method has sort of a mind of its own (at least from what I've experienced). The results aren't always what would be initially expected. A good case in point is Guy's observation that, even when setting “MAX_GROSS_LEVERAGE = 4.0”, the actual leverage never goes above 1.5.

The optimize algorithm is simply doing its thing and determines that a solution to the objective (maximize the alpha factor) does not require the leverage to be above 1.5. The 'MaxGrossLeverage' constraint only ensures that the leverage won't exceed that amount, not that it will equal that amount.

Anyway, it's illuminating that it might be 'optimal' to use only 1.5 leverage (at least as far as the optimizer is concerned). A simplistic approach to getting an overall leverage of 3.0 is then to set “MAX_GROSS_LEVERAGE = 1.5”, then go down to the 'order' function and simply multiply all the weights by 2. This results in a net leverage of 3.0. (There are a few times it inches above 3.0, but that could easily be fixed with better order logic and probably doesn't impact the overall results much.) Not pretty, but it shows what a leveraged algorithm would do.

 order_target_percent(stock,weight*2.0)
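The scaling trick is purely linear, which a toy calculation makes clear. The weights below are hypothetical optimizer output, not from any actual run:

```python
# Hypothetical optimizer output with gross leverage 1.5:
weights = {'AAPL': 0.5, 'MSFT': -0.4, 'XOM': 0.6}
gross = sum(abs(w) for w in weights.values())

# MaxGrossLeverage only caps gross exposure; scaling every weight by a
# constant multiplies the realized gross leverage, which is all the
# order_target_percent(stock, weight * 2.0) trick relies on.
levered = {s: 2.0 * w for s, w in weights.items()}
levered_gross = sum(abs(w) for w in levered.values())
```

So a 1.5x-gross solution ordered at twice the weights lands at 3.0x gross, exactly as described above.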



Attached is a backtest. Nice results, especially the way it sails through the 2008 downturn. Kudos to you Grant.

# Backtest ID: 58a0805997378f5de62c2d8f

Nice results, especially the way it sails through the 2008 downturn

There could be "over-fitting" here by a so-called quant. If you try the code below, the 2008 downturn behavior is different. One can kinda tune the "Great Recession" response by flipping the sign of d for extreme values of x_tilde. I have no firm basis for this, although in my hobbyist quant mind, the pure mean-reversion factor becomes more of a combined mean-reversion/momentum factor with this change. Presumably, one could write separate mean-reversion and momentum factors and then combine their alphas, and then run the combined alpha through the optimization API.

    def mean_rev(context, data, prices):
        m = len(context.stocks)
        d = np.ones(m)
        x_tilde = np.mean(prices, axis=0) / prices[-1, :]
        y_tilde = 1.0 / x_tilde
        d[x_tilde < 1] = -1
        # d[x_tilde < 0.9] = 1
        # d[x_tilde > 1.1] = -1
        x_tilde[x_tilde < 1] = 0
        y_tilde[x_tilde != 0] = 0
        x_tilde = x_tilde + y_tilde
        return (d * x_tilde, np.sum(x_tilde) / m)
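For anyone who wants to poke at the factor outside the backtester, the same logic can be written as a standalone function. This is a sketch: the context/data parameters are dropped, and prices is a plain NumPy array of shape (lookback, n_assets):

```python
import numpy as np

def mean_rev_signal(prices):
    """Standalone version of the mean-reversion factor above.
    prices: (lookback, n_assets) array; no Quantopian context needed."""
    m = prices.shape[1]
    d = np.ones(m)
    x_tilde = np.mean(prices, axis=0) / prices[-1, :]   # mean / last price
    y_tilde = 1.0 / x_tilde
    d[x_tilde < 1] = -1          # flip sign for assets that trended up
    x_tilde[x_tilde < 1] = 0     # keep x_tilde only for laggards...
    y_tilde[x_tilde != 0] = 0    # ...and y_tilde only for the rest
    x_tilde = x_tilde + y_tilde
    return d * x_tilde, np.sum(x_tilde) / m

# Asset 0 trended up (last price above its mean), asset 1 trended down.
signal, scale = mean_rev_signal(np.array([[1.0, 2.0], [2.0, 1.0]]))
```

Running it on a toy two-asset series makes the sign convention visible: the up-trender gets a negative (short) signal, the down-trender a positive one.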


Grant, I use the version provided by Dan. Used “MAX_GROSS_LEVERAGE = 2.0”, and “weight*3.0”. By my calculations, even after having paid interest charges of 3% on 5/6 of A(t), the strategy would be left with a 15.86% CAGR for the 12.12 year test period.

Not bad. There is alpha in there, even at 6 times leverage.

Your mean_rev might be what protected you during the financial crisis. I would leave it there as a protection for similar future events. And study how it did its job to see if it was responsible for dampening price variations during that period.

Grant, by making some slight changes to your basic design, you could increase overall performance by being more lenient on some of the parameters.

I went for the following:

Constraint parameters:

    MAX_GROSS_LEVERAGE = 2.0        # was 1.0
    MAX_SHORT_POSITION_SIZE = 0.03  # was 0.015
    MAX_LONG_POSITION_SIZE = 0.035  # was 0.015
    MAX_TURNOVER = 1.50             # was 0.75

I wanted to be biased to the upside, therefore more weight. Also, possibly a higher concentration on best performers. And allowed more trading activity by easing the turnover restriction.

With these numbers, and the “weight*3.0” to get 6 times leverage, I obtained a 21.94% CAGR, and this after having paid the leverage charges, which totaled about $6.2 million. Just by changing initial assumptions, you can increase overall performance. A 21.94% CAGR is an excellent performance level.

Grant, sorry, forgot to attach the backtest.

# Backtest ID: 58a0a4efe3a2855e06118ca8

Guy -

As far as this discussion thread goes, I'd say the question is how to determine the optimization API settings. Lots of bells and whistles, so perhaps some rules of thumb could be published? For your typical long-short equity algo, what is the recommended configuration of the optimization API? Maybe it comes down to running backtests through pyfolio, looking at the guidance on https://www.quantopian.com/allocation, and making adjustments until it conforms. Even then, there's probably too big a parameter space to just make adjustments by fiddling by hand.

Grant, even there, you want some preliminary indication that the task might be worth the effort. Your initial settings did flatten out the equity curve, even during the financial crisis. However, the price was high in the sense that it underperformed averages. You want to know what the limits of your trading strategies are. A 100,000-run Monte Carlo test on your trading script could take quite a while, when all you want are approximations, since the future will be different anyway. However, your program will continue to act as it did in the past, and not having an inkling of where the limits are could be disastrous, since your program might not be able to handle it past those limits. Also, whatever optimization method you want to use, what you feed it will be paramount.

Grant, the Pyfolio Bayesian tear sheet produced the following: http://alphapowertrading.com/quantopian/Optimize_API_dist.png

Setting used: bt.create_returns_tear_sheet(live_start_date='2014-1-1')

What the above chart indicates is that, going forward, the methodology is breaking down. Code modifications would be required to change that picture.

It is on my list to use the alphalens thingamabob to analyze the factor. There's a bit of a learning curve, since all of the examples are pipeline-based, as far as I know. The optimization won't do diddly if there is no "alpha" in the factor to begin with.

Grant, yes. Note that the original design did not generate alpha. It was by changing assumptions and leveraging that the picture changed. And still, we can see portfolio metric deterioration.

A couple of other things to keep in mind: consider spending the available $1M in cash; it is only utilized a little bit. Here's some info on the two most recent algos above. They take a long time to run, so this applies just to Jan 2013, where the benchmark is around 39%, using the PvR tool:
The DW version profited 1,409,033 on 3,723,288 activated/transacted for PvR of 37.8%.
The GF version profited 10,122,169 on 37,877,149 activated/transacted for PvR of only 26.7%.

That last one is apparently shorting nearly 38 million. I'm not sure how margin works at IB for example and I guess one has to have a margin account to do that kind of shorting.
Slippage and commissions are turned off in both so brace for worse.

When looking at a backtest chart, a good way to think of it is that those returns might be possible, but only as long as practical real-world limitations are observed. I've seen times where reining in margin wound up with real profits actually higher than the previously merely apparent profits, so hang in there.

Blue, yes. There is a big difference when you consider with and without frictional costs. I pushed this trading strategy to new heights, not by optimizing the optimizer, but by changing its initial parameters. Changing its trading universe.

Without frictional costs, I obtained:

That is a 28.69% CAGR over the 12.12 years. Including interest charges, that return would drop to 24.06%. Still quite interesting.

And with frictional costs, the picture changed considerably.

Here we see the impact of slippage and commissions using Quantopian's default values. The CAGR dropped to 14.65%. And if we include interest charges, it drops further, to 10.52%! That is getting pretty close to the average secular trend, in a way showing there is no free lunch...

By the very nature of this trading strategy, I find it understandable, since it is catching the fuzzy statistical drift, and thereby, over the years, will tend toward the secular trend once all expenses are considered.

Note that the picture would be worse if not leveraged 6 times.

Is there a broker that will let you use their ~$40M without putting up a couple of mansions as collateral?
Note the leverage was actually 8, not 6. The code was set to do 6, but it had other ideas during the day--maybe the most common user mistake. Explanation.

How does one provide a quadratic objective function to quantopian's optimize API?

Hi Scott -

I'd like to be able to call the optimization API and get the target weights out without automatically placing orders. Is this feasible? Something tidy like this would be handy:

weights = run_optimization()


This would allow computing a set of weights over an expanding trailing window, for example, so that they could then be smoothed, prior to placing orders.

Below is the code I sorted out, in response to my question immediately above (based on Dan Whitnable's code above):

    actual_weight = pd.Series(index=context.stocks)
    for security, position in context.portfolio.positions.items():
        if context.portfolio.portfolio_value != 0:
            actual_weight.set_value(security, position.amount * position.last_sale_price / context.portfolio.portfolio_value)
        else:
            actual_weight.set_value(security, 0)

    adj_weight = opt.calculate_optimal_portfolio(
        objective=objective,
        constraints=[
            constrain_gross_leverage,
            constrain_pos_size,
            market_neutral,
            sector_neutral
        ],
        current_portfolio=actual_weight
    )

    # Exit anything no longer in the universe, then order to the new weights.
    for stock in context.portfolio.positions.keys():
        if stock not in context.stocks:
            order_target_percent(stock, 0)
    for stock in context.stocks:
        order_target_percent(stock, adj_weight[stock])

Hi Q,

Can someone help me answer these questions, please? It is quite urgent for me.

1. Is it possible to provide a quadratic function to the objective?

2. Can I create my own constraints?

Many thanks
Pravin

Hi All,

Sorry for the radio silence on this thread. I've been pretty head down working on internal infrastructure projects recently. A few quick replies to some of the recent questions:

I'd like to be able to call the optimization API and get the target weights out without automatically placing orders. Is this feasible?

Agreed that this would be useful. One catch is that, as of a recent update, the optimize API is accounting for unfilled open market orders when it calculates new targets. This has been a longstanding feature request for the order_target_* family of order methods, but it's hard to change those without breaking backwards compatibility. What this means is that if a hypothetical calculate_optimal_portfolio function simply returned the new targets, you'd have to re-apply the same adjustment for open orders that order_optimal_portfolio is currently doing. I'm not sure off the top of my head how to solve that issue elegantly, but the challenge here is primarily one of design, not of implementation.

Is it possible to provide a quadratic function to the objective?
Can I create my own constraints?

We don't currently have support for custom objectives or constraints. There's still a lot of churn happening in the implementation, and I don't want to commit to exposing implementation details (which would be necessary to allow these kinds of customizations) until there's some degree of confidence that those details are stable. If you have suggestions for particular new objectives and constraints that can't currently be expressed with the built-ins (the most obvious omission right now is a volatility-minimizing objective), I'd be curious to hear about them.

Thanks Scott -

I think calculate_optimal_portfolio does the trick, giving the new optimal weights. Presumably, it ignores any open orders.

At present, in my fumbling around, I'm only placing, at most, one set of orders per day. Since all open orders are cancelled at the end of the day, no problemo.

The general use case here is to be able to smooth over either an expanding trailing window, or to store the weights in context and then smooth.
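One hedged sketch of that smoothing idea, assuming the daily target weights are stored as a list of symbol-to-weight dicts (e.g. in context) and averaged before ordering. The helper name and data layout are hypothetical:

```python
import pandas as pd

def smooth_weights(weight_history, span=5):
    """Average the last `span` daily weight vectors.
    weight_history: list of dicts mapping symbol -> target weight.
    Symbols absent on a given day count as weight 0 for that day."""
    df = pd.DataFrame(weight_history).fillna(0.0)
    return df.tail(span).mean()

# Two days of (hypothetical) optimizer output; MSFT dropped out on day 2
# and TSLA entered, so both are averaged against an implicit 0.
history = [{'AAPL': 0.6, 'MSFT': 0.4},
           {'AAPL': 0.2, 'TSLA': 0.8}]
smoothed = smooth_weights(history, span=2)
```

The averaged Series can then be fed to the ordering logic, which tempers day-to-day churn in the optimizer's output.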

We don't currently have support for custom objectives or constraints.

As I understand it, Quantopian will use the optimize API for all their fund algorithms and internally to route orders to their trading desk. This practically eliminates all algorithms that do not "fit" into the optimize API paradigm.

I'm not sure how to express it, but from a design standpoint, if the optimize API could be sort of an add-on/extension/cross-compatible software thingy with CVXPY, it would be nice. As I understand, CVXPY is the underlying engine of the optimize API, so if one could mix-and-match the optimize API with CVXPY, it would be nice. I guess that the optimize API will eventually be in github/zipline, and then folks could "roll their own" but maybe something could be done at the outset. For example, custom objectives and constraints could be expressed in a CVXPY-like way (and then perhaps put through an interpreter thingy that would integrate them into the optimize API). Sorry, I'm not a software engineer; hopefully my point is coming across.

Hi Scott -

Note that I'm still seeing the crash if a turnover constraint is added, so a try-except must be used (see code I just posted on https://www.quantopian.com/posts/long-short-market-neutral-mean-reversion ).

Also, it would be nice to be able to include a hedging instrument (or basket of hedging instruments) in the optimization (e.g. SPY), as a means to keep beta close to zero.

[EDIT: Sorry for repeating myself--I see above that I'd already mentioned including hedging instruments. There was no response, so I'm still wondering if it might be a useful addition.]

Hi Scott,

Here are a few things I would like to see in the Optimize API:

1. Ability to minimize covariance of a portfolio.
2. Ability to neutralize (set = 0) systematic factors that I calculate (Instead of simply dollar neutral; I want to make portfolio beta neutral for some betas).

Best regards,
Pravin

Hi Scott -

I'm still "tire-kicking" this API, and noticed the log output:

2005-01-12 09:45 WARN optimizer.py:204: InaccurateOptimization: Optimization completed with status OPTIMAL_INACCURATE.
This usually means that the target for TargetPortfolioWeights is far away from satisfying all constraints.

Here's the block of code I'm using:

    try:
        adj_weight = opt.calculate_optimal_portfolio(
            objective=objective,
            constraints=[
                constrain_gross_leverage,
                constrain_pos_size,
                constrain_turnover,
                market_neutral,
                sector_neutral
            ],
            current_portfolio=actual_weight
        )
    except:
        pass  # keep the prior weights if the optimization fails


It seems that there needs to be some way to get out a success/status flag from the optimizer (e.g. see https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.optimize.OptimizeResult.html#scipy.optimize.OptimizeResult).

A Markowitz scheme does not look possible here, given that volatility is not listed amongst the constraints. I suspect it might be a useful inclusion, although the resources required for the relevant matrices might be too high. There is an adaptation of the Critical Line Algo on Q, but I gave up trying to decode the code!

@ Anthony -

Note that CVXPY is now whitelisted on Q. It should make stuff like https://blog.quantopian.com/markowitz-portfolio-optimization-2/ easier. I've used it a bit, and it is fairly user-friendly and well-documented.

Ah brilliant thanks. I must say leaving aside my bigoted distrust of all backtesting, I do admire the job these people have done.

The problem is that if your algorithm gets selected into the fund, you are supposed to use only the optimize API and cannot use CVXOPT or CVXPY. This is very limiting in my opinion. I have all sorts of complex optimization routines that do not fit into the optimize API paradigm.

I think it is a matter of scope and use cases for Quantopian--how much engineering effort do they want to put into it, and to what end? Once the code is released to github (presumably the path here), home-grown approaches that build off of it should be easier.

As far as it being a requirement for fund algos to use the optimize API, it would behoove Quantopian to clarify this point in a style guide or something. It is implied, though, that they'd like to see something like it for risk management at the algo level.

Scott, I'm just implementing my algo via the Optimizer. I'm telling it what weights to use, via opt.TargetPortfolioWeights(weights), as per Jeremy's post here. I infer from his code, and your "tricky" post above, that you have to be careful when exiting positions: they must be in the universe and in the weights passed to the optimizer, with a weight of zero. I assume that if they are not, the call to order_optimal_portfolio just skips over anything it's not told about, and doesn't exit any positions. This is a bit of a faff. I wonder if it would be better to make the default behaviour exit all positions not included in the optimized universe.

Hi Burrito Dan -

Here's an example I cooked up. I use:

    adj_weight = opt.calculate_optimal_portfolio(
        objective=objective,
        constraints=[
            constrain_gross_leverage,
            constrain_pos_size,
            market_neutral,
            sector_neutral
        ],
        current_portfolio=actual_weight
    )


You have to give it current_portfolio and then it runs the optimization, and returns the weights. Then, when ordering, you simply exit any positions that are no longer in your current universe.

Personally, I think order_optimal_portfolio tries to do too much, since it does not report out the weights, and the ordering is all under the hood.

# Backtest ID: 58e8e8ba94f35a6532f242c7

@Dan any positions in your current portfolio are implicitly unioned into your universe and given targets (or alpha values, in the case of MaximizeAlpha) of 0.0. If you call order_optimal_portfolio with a sequence of calls like:

TargetPortfolioWeights({AAPL: 0.5, MSFT: 0.5})
TargetPortfolioWeights({AAPL: 0.5, TSLA: 0.5})


then on the first call you'll see buys for 50% of AAPL and 50% of MSFT, and on the second call you'll see a buy for 50% of TSLA and a sell of the MSFT position.
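The implicit-union rule can be sketched in a few lines of plain Python (a hypothetical helper illustrating the behavior described, not the API's actual implementation):

```python
def effective_targets(current_positions, new_targets):
    """Sketch of the described behavior: assets currently held but absent
    from the new target dict implicitly receive a target weight of 0.0."""
    full = {asset: 0.0 for asset in current_positions}
    full.update(new_targets)
    return full

# After the first call we hold AAPL and MSFT; the second call's targets
# mention only AAPL and TSLA, so MSFT is implicitly targeted to zero
# (i.e. it gets sold), matching the sequence described above.
second = effective_targets(['AAPL', 'MSFT'], {'AAPL': 0.5, 'TSLA': 0.5})
```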

I've attached a short backtest that demonstrates this behavior.

# Backtest ID: 58f621c30feda961a624b8ab

Over time I've become less convinced of the usefulness of the universe parameter to order_optimal_portfolio. My initial reasoning for including it was that I wasn't sure we could always determine a correct optimization universe automatically, and in the face of ambiguity, refuse the temptation to guess.

Having worked with the Optimize API a fair amount now, I'm inclined to think that the "correct" universe is almost always given by:

set(current_nonempty_positions).union(assets_referenced_by_objective)


In light of that fact, my current thinking is that universe should either be optional or we should allow passing a special sentinel value (e.g. quantopian.experimental.optimize.INFER_UNIVERSE), which would produce the behavior noted above.
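The inference rule Scott describes is small enough to sketch directly (a hypothetical standalone function; positions maps asset to share count):

```python
def infer_universe(positions, objective_assets):
    """Sketch of the proposed INFER_UNIVERSE behavior: current nonempty
    positions unioned with the assets the objective references."""
    nonempty = {a for a, amount in positions.items() if amount != 0}
    return nonempty | set(objective_assets)

# A held AAPL position, a closed-out (zero-share) MSFT position, and an
# objective that references only TSLA:
universe = infer_universe({'AAPL': 100, 'MSFT': 0}, ['TSLA'])
```

Note that the zero-share position drops out, which is exactly the "current_nonempty_positions" qualifier in the expression above.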

@Pravin

Here are a few things I would like to see in the Optimize API:
Ability to minimize covariance of a portfolio.

Yup. This is in the roadmap. One interesting question is whether MinimizeVariance should be its own objective, or whether it makes more sense to incorporate a variance penalty into MaximizeAlpha (that starts to get challenging because you have to figure out how to make the units comparable between your measure of alpha and your measure of volatility).

Ability to neutralize (set = 0) systematic factors that I calculate (Instead of simply dollar neutral; I want to make portfolio beta neutral for some betas).

You can actually do this already using the opt.WeightedExposure constraint, which takes a DataFrame of risk factor weights (one column per factor, one row per asset) and Series of min/max net exposures to each factor.
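A toy computation of the quantity being constrained may help. The loadings and weights below are hypothetical, and this sketches only the math (net factor exposure as a weight-loading dot product per factor), not the constraint's implementation:

```python
import pandas as pd

# Hypothetical loadings: one column per risk factor, one row per asset,
# mirroring the DataFrame shape described above.
loadings = pd.DataFrame({'beta':  [1.2, 0.8],
                         'value': [0.3, -0.1]},
                        index=['AAPL', 'XOM'])
weights = pd.Series({'AAPL': 0.5, 'XOM': -0.5})

# Net factor exposure is the per-factor dot product of portfolio weights
# with factor loadings; the constraint bounds each entry of this Series
# between the supplied min and max values.
exposures = loadings.mul(weights, axis=0).sum()
```

So a "beta neutral" portfolio in Pravin's sense would pass min/max bounds of (or near) zero for the beta column.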

@Scott,

Thanks for the feedback. I have another issue. My algorithm runs an optimization routine per sector and then trades each sector separately. How can I combine this into one optimization routine? Basically, I want MaximizeAlpha to work independently per sector. That is, if it is trading 4 sectors and capital is 1 million, it will allocate 250K to each sector and maximize alpha per sector.

Best regards,
Pravin

Is it possible to give an example how opt.WeightedExposure works?

@Pravin

When you say you want to allocate 250K to each sector, is that gross or net exposure? If it's net (which would presumably mean your optimization is long-only or short-only within each sector?), you could use the existing NetPartitionExposure constraint to constrain your exposure to individual sectors. Assuming each asset appears in only a single sector, maximizing alpha globally with a sector exposure constraint should be equivalent to maximizing alpha locally within each sector and then merging the results. (This isn't necessarily true if you have other constraints in play, but in that case you actually probably want the global maximum.)

I don't think we have a good way to constrain gross exposure on a per-sector basis right now. That's probably another good constraint to add. Note that due to convexity considerations, we'd probably only be able to support an upper bound on per-sector gross exposure (this is the same as the behavior of MaxGrossLeverage currently).

Hi Scott -

A couple questions:

1. What's the story behind import quantopian.algorithm as algo? Is there any documentation you could point us to, describing what it is and how to use it?
2. Could you elaborate on how this optimization API effort fits with the Q fund? Is the idea that you'd encourage/require the use of order_optimal_portfolio with certain constraints? Also, it seems that folks trading their own money may not want to use order_optimal_portfolio since there is no control over the ordering (e.g. presumably, you are submitting market orders, correct?). Are you wanting order_optimal_portfolio to be universal, or more focused on the Q fund?

Hi Scott -

Just following up--when you get the chance, I think it would help to layout the various use cases you are addressing here. For example, if the intent is for order_optimal_portfolio to cover all use cases (e.g. backtesting, Q paper trading, IB, Robinhood, Q fund, etc.) then we need to discuss how users will configure the order type(s), manage orders, etc. Are you planning to provide user "hooks" into the ordering functionality within order_optimal_portfolio?

Could constraints be used to limit the number of securities traded? Factors tend to lose their predictive power when moving towards the center of the universe. What type of constraint could be used to limit to 100 positions long and 100 positions short in a universe of 600? An example would be greatly appreciated.

@Scott, I missed the MaxTurnover part of the Notebook for ages. Please can you add it to the quantopian.com/help

One "gotcha" is that your existing positions are brought into the optimizer, and will use the default parameters for position concentration.

This applies the same default to new and existing positions:

PositionConcentration.with_equal_bounds(min, max)


This will apply min/max_weights if the existing position is listed, otherwise default_min/max_weight will be used.

PositionConcentration(min_weights, max_weights, default_min_weight=0.0, default_max_weight=0.0)
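The listed-versus-default lookup can be illustrated with a small sketch (a hypothetical standalone function mimicking the behavior described above, not the constraint's actual code):

```python
def position_bounds(asset, min_weights, max_weights,
                    default_min_weight=0.0, default_max_weight=0.0):
    """Sketch of the described lookup: explicit per-asset bounds when the
    asset is listed, otherwise the defaults. With the defaults left at 0.0,
    an unlisted existing position is effectively pushed toward zero weight."""
    return (min_weights.get(asset, default_min_weight),
            max_weights.get(asset, default_max_weight))

# AAPL has explicit bounds; MSFT (an existing position not listed in the
# dicts) falls back to the (0.0, 0.0) defaults -- the "gotcha" above.
listed = position_bounds('AAPL', {'AAPL': -0.01}, {'AAPL': 0.02})
unlisted = position_bounds('MSFT', {'AAPL': -0.01}, {'AAPL': 0.02})
```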



Can I set constraints at the individual stock level?

If I want to bound each stock position (not sector) to a range like [-0.1%, 0.1%], but customized per stock that was ranked and passed to the optimizer, is there a way to use constraints to achieve that while still doing all the other things that order_optimal_portfolio does?

Hi All, I am having a play with this algo at the moment and would like to implement a filter on the universe based on get_fundamentals(), which I have put in before_trading_start(). I am very new to this and I am not sure it has any effect. Anyone got any pointers / examples?
i.e.