Contest 32 Rules Changes - Commission and Leverage

We are making two changes to the contest rules to help align the contest with the allocation process.

1. All entries will use a commission model of $0.001/share, with no minimum trade cost - backtest and live trading.
2. Leverage will be capped at 1X, down from 3X in previous contests.

These changes will take effect in Contest 32, which has a submission deadline of August 1, 9:30 AM Eastern. Previous entries will not roll over into this new contest. Since this is a clean start, everyone can make 3 new entries this month, regardless of how many you have already entered. If you've already made a new entry this month, we will send you an email asking you to re-submit your entry.

Why these changes? We've been making multi-million dollar allocations to community members for more than 3 months, and we are always learning and adapting. We have several contest changes in the planning stages, and these are the first two to be ready.

The $0.001 commission is closer to what our clients pay for their trades. That lower commission cost enables a slew of strategies that were prohibitively expensive to trade using the default commission. It's easier to capture the profits from a weak alpha when you can turn over your portfolio inexpensively. We want you to research and develop algorithms that work under this type of commission model so that you have a better chance to win the contest or to receive an allocation. (A future contest change will update the slippage model to more accurately reflect the slippage that we're getting with our state-of-the-art trading platform.)

The leverage change is more prosaic: when we evaluate strategies for an allocation, we want to make it easier to compare apples to apples. Many of the risk measurements are already leverage-adjusted (like Sharpe ratio), but other considerations, like slippage, are easier to evaluate when there is a consistent 1X leverage across all algorithms.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.


All entries will use a commission model of $0.001/share, with no minimum trade cost - backtest and live trading.

This is awesome. Does this mean long/short strategies must be de-leveraged (e.g. 50% NAV long, 50% NAV short) to keep total gross leverage under 1X? Does this commission model replace the backtesting environment's default commission model?

@Norbert: Yes, that's correct. Total gross leverage is capped at 1X.

@Cheng: The default in the backtester is currently still the IB default. However, when you submit to the contest, the $0.001/share model will be used. If you would like to use this model in your backtests, you should include the following line in initialize:

set_commission(commission.PerShare(cost=0.001, min_trade_cost=0))
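For intuition about why the no-minimum model matters for small orders, here is a small standalone sketch. It is not Quantopian's implementation - it just reproduces the arithmetic of a `PerShare(cost, min_trade_cost)` commission model, comparing the old IB-style default ($0.005/share with a $1 minimum per trade) against the new contest model ($0.001/share, no minimum):

```python
def per_share_commission(shares, cost_per_share, min_trade_cost=0.0):
    """Commission for one fill: per-share cost, floored at the trade minimum."""
    return max(shares * cost_per_share, min_trade_cost)

# Old IB-style default: $0.005/share with a $1 minimum per trade.
# New contest model:    $0.001/share with no minimum.
for shares in (10, 100, 10000):
    old = per_share_commission(shares, 0.005, min_trade_cost=1.0)
    new = per_share_commission(shares, 0.001)
    print("%6d shares: old $%.2f  new $%.2f" % (shares, old, new))
```

Note how the minimum dominates for small orders: a 10-share order costs $1.00 under the old schedule but only $0.01 under the new one, which is exactly the point raised in the discussion below about high-turnover, small-lot strategies.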


May I suggest that minimum commissions and slippage be considered. Whatever model you want to use, it should be representative of real-life situations - something such as IB's commission schedule of $0.005 per share with a $1 minimum cost per trade. A lot of trading strategies generate an awful lot of trades below 100 shares, and a $0.001 per-share commission says you can get into a trade for less than 10 cents. If you send an order for 10 shares, the cost will be a penny. Not realistic. Any trading strategy needs to survive in the real world. By minimizing the impact of frictional costs, we are only deceiving ourselves. And then some wonder why their trading strategies break down going forward. We should not ask that question if we already pamper our test results by undercharging what it really costs to do business.

How rigid (strict) is the 1X leverage constraint? If an algo every now and then reaches 1.05 or 1.1 instead, will it be disqualified?

I'm also curious about the leverage constraint. It would be really nice if the leverage limit were simply enforced on the back-end (simulating a broker) so we wouldn't all have to reinvent the wheel with each algorithm on how to control leverage, especially since the order methods we have access to on the front end are fairly clunky when it comes to leverage control. Obviously shorts complicate this situation, but the argument still stands, I think: why not just solve the problem once? What about intraday leverage spikes? If we place buy and sell orders at the same time, sometimes the buy orders will fill before the sell orders, so even if we're below 1.0 leverage at the close, we may have exceeded the limit at some point during the day.
I think the solution to all these questions is to give us init functions like these: set_maximum_leverage(1.0) and set_maximum_intraday_leverage(2.0). It would be great if Q would simply block orders that exceed our specified intraday leverage, and automatically liquidate assets for us if we exceed intraday leverage through short positions or at the end of the day. So that's my feature request... Either that, or some new functions that help abstract the order process in a way that takes leverage into account (something like a function that takes an array of longs, an array of shorts, and corresponding weights, and processes the orders as cash is freed up). I realize that we can all write this ourselves, but if every single person has to write the same code, why not just make it part of the platform? But ultimately, my big question in regards to contest 32 is: do we need to keep intraday leverage under 1.0 as well? This would mean needing to be very careful about executing sells before buys.

+1 on Tim Vidmar's comment. Since you require short exposure, being extremely rigid with 1X leverage creates a situation where we have to leave headroom, and would generally be underinvested to accommodate the limit. Perhaps a trailing average leverage would be a reasonable way to sort this out.

As a follow-up to my last post, I have shown in the following thread: https://www.quantopian.com/posts/alpha-vertex-precog-dataset that commissions and slippage certainly matter in a trading environment. See tests numbered 23, 24, and 25. As for leverage, the solution is not limiting it to 1X, but charging for it at the going rate at whatever level one wants to go. This way, when we backtest, we would see if a strategy survived its leveraging costs. We would be able to see their impact, even when a strategy blows up due to these leveraging fees. For me, limiting leverage is like going against the capital market line. As basic as that.
One can increase the return by borrowing funds as long as the cost of the borrowed funds is less than the added return. Otherwise, it is not even worth it to leverage.

@Guy - I agree with you about realistic commissions, but I think you missed a key point. This change to $0.001/share is representative of the real-life experience of an algorithm that receives an allocation and is traded by Quantopian on behalf of our clients. I acknowledge that a retail customer can't get that commission from IB, but that's not relevant to the contest. The goal of the contest is to encourage strategies that can earn a future allocation from us. It doesn't make sense for us to constrain the contest to IB's limitations. Outside of the contest, you can use whatever commission structure you feel is most relevant to your purpose.

@Tim and @James There is a little bit of flex built into the rule - there always was, even when it was a 3X leverage limit. The leverage limit is enforced at 1.1.

@tired The good news is that we have recently added a very powerful way to control leverage - the Optimize API. In the help, check out MaxGrossLeverage and see it in use in the Long-Short Lecture Example. I think that's almost exactly what you are looking for. You still have to call it - you won't get liquidated automatically. But that API does resolve all of the ordering tedium.
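For readers without the Optimize API at hand, the core idea behind a gross-leverage cap can be sketched in plain Python: if the sum of absolute target weights exceeds the cap, scale all weights down proportionally. The helper below is an illustration only (the `cap_gross_leverage` name is made up here, and the real MaxGrossLeverage constraint also handles the ordering logic discussed above):

```python
def cap_gross_leverage(weights, max_gross=1.0):
    """Scale a dict of {asset: target_weight} so that sum(|w|) <= max_gross."""
    gross = sum(abs(w) for w in weights.values())
    if gross <= max_gross or gross == 0:
        return dict(weights)  # already within the cap; nothing to do
    scale = max_gross / gross
    return {asset: w * scale for asset, w in weights.items()}

# A long/short target that is over-levered: gross = 0.9 + 0.6 = 1.5X.
target = {"AAPL": 0.9, "XOM": -0.6}
capped = cap_gross_leverage(target, max_gross=1.0)  # scaled to 1.0X gross
print(capped)
```

Note that this only constrains the *target* weights; as several posters point out, fills can still push intraday leverage above the cap if buys execute before sells.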

@Guy2 - Again, you are free to use whatever leverage you wish on your personal backtesting. We're applying a leverage limit in the contest, that's all. The leverage limit enables a simpler apples-to-apples comparison on the leaderboard.

@Dan in the main post you mentioned "The leverage change is more prosaic: when we evaluate strategies for an allocation, we want to make it easier to compare apples to apples". In the previous post you mentioned "@Guy2 - Again, you are free to use whatever leverage you wish on your personal backtesting. We're applying a leverage limit in the contest, that's all.".

It appeared you wanted to fix leverage to evaluate strategies for allocation, but in the earlier post you mentioned it is just for the contest. Do the algorithms that we backtest (outside of the contest) need to run at 1X leverage to be considered for an allocation? Or are you referring only to algorithms in the contest being evaluated for allocation? If algorithms with leverage > 1 will still be considered for allocation, then this rule is just for the contest leaderboard - not for the allocation?

Finally, I might actually try to win now...

@Leo All of your backtests get put through our automated screening mechanism for the fund. It's easiest for us to evaluate an algorithm that is running 1X leverage. I'd encourage you, therefore, to just run everything you do at 1X leverage in order to maximize your chance of getting found/selected by our automation.

This isn't a black-and-white case where leverage will get you excluded from getting an allocation. It's more a matter of optimization - 1X leverage has the fewest obstacles to evaluation and selection.

I'm happy with these changes except for having the leaderboard filled with exotically colored animals... I rather enjoyed seeing my real name in lights on the leaderboard, as well as any additional public notoriety and/or validation it afforded me. I do understand the need for those who want anonymity, and the near-impossible task of allowing users to enter their own alias instead, but I feel that losing the ability to have real names on the leaderboard is an unfortunate remedy to the problem.

As a compromise, I would suggest giving users the option of either using their real names or the generated code names on leaderboards (forums would remain real-name only). Is there a good reason (except for the coding task) not to do this? Why wouldn't it satisfy both those who want to stay anonymous and those who wouldn't mind having more attention paid to their hard work?

Thanks Dan -

Regarding the "real names" (Q user names) versus aliases on the leaderboard: since you have an open, public forum and user profiles, any user wanting to publicize his contest ranking can do so by revealing his user name (or by changing his user name to contain the contest alias, e.g. Grant "Zorro" Kiehne). One problem I see is that you have the potential for a nearly infinite number of contestants (presumably in the billions), whereas there aren't that many code names (and a very limited number of cool ones, like Zorro). So you'll need to use alphanumeric codes, I suppose, which is kinda dry and impersonal. Maybe symbols (e.g. the one adopted by the artist formerly known as Prince)?

Regarding the leverage constraint, a bit more guidance would be helpful. Specifically, based on a backtest or tear sheet, is there a way to tell if the algo would conform to the 1.1 absolute limit (assuming out-of-sample, it performs similarly)? Or is it a matter of writing some code, to check the leverage every minute of the backtest?

These colored animal names are ridiculous. Swing and a miss on that one. As long as we have the ability to change our user name, why would these forced animal names be necessary? If someone has trouble with being harassed, then they can obscure their own name.

Yeah, I don't buy it either -- that a bunch of contest entrants are so smart that they can write algorithms that totally game the stock market, but couldn't figure out that they could keep their identities obscure by using pseudonyms. (I figured that out after my first forum post -- I don't want this stuff to show up under my name on Google.) So I think the issue is that it's Quantopian that wants to keep their star coders' identities unadvertised.

I enjoyed being able to look up leading algo writers on the forum to see what they post about.

@Rob I like my new Quantopian name: I am Sky Blue Hawk. I think you're just disappointed that your new name is Wet Blanket. I've also benefited from having my real name on the leaderboard -- the CEO of the company where I work posted it on our LinkedIn page just two days ago. Still, overall I think the Q team has done a good job of iterating in a controlled fashion. There are tons of moving parts here and they have to balance simplicity and backwards compatibility with the infinite number of tweaks that can be made. I doubt they'll be adding a nice-to-have name-change feature to the leaderboard when they've had stability issues recently and bigger fish to fry. I personally am happy about the new commission model and am looking forward to the new slippage model.

@atiredmachine - I think you nailed the real (sinister) reason for the change:

"I think the issue is that it's Quantopian that wants to keep their star coders' identities unadvertised."

I also agree with your other comment:

"I enjoyed being able to look up leading algo writers on the forum to see what they post about."

I too enjoyed matching quality posts with their respective leaderboard performance. With so many competing ideas/comments it's good to have another way to validate their worthiness with the authors performance.

Ha! @atiredmachine (aka Viridian Hawk) I see what you did there... Good idea!
EDIT: I now see that @Grant (aka Off-White Seal - lol) mentions this too.

I've changed my real name also to include my newly minted code name. Perhaps other (brave) souls will follow suit. Just seems worthless to have a leaderboard with entries that can't be publicly validated.

Now back to @James comment - why couldn't folks just have done this all along? There is definitely more to this story...

I'm at ease now at least knowing that we can match the two monikers (albeit manually) and can add other info in our profile to validate who we really are- as if anyone really cares :)

LOL - indeed I'm Off-White Seal. In fairness, I think the leadership team needs to take on goofy animal names on https://www.quantopian.com/about.

Regarding the leverage limit, what is the actual pass/fail rule? Is it that for every minute, this condition must be satisfied:

context.account.leverage < 1.1


If so, perhaps this implementation would work:

def initialize(context):
    context.leverage_limit = False

def handle_data(context, data):
    # Latch the flag once leverage breaches 1.1 at any minute.
    if not context.leverage_limit:
        if context.account.leverage >= 1.1:
            context.leverage_limit = True

    if context.leverage_limit:
        record(leverage_limit=1)
    else:
        record(leverage_limit=0)


Regarding the contest code names, I'm skeptical regarding the rationale provided. It seems that if only a few folks were having difficulties with pestering, but still wanted to use their actual names as Q user names, Q could have simply provided an option of having one's user name not to be applied to the contest (e.g. a check box "Use a code name for the contest"). Applying the code names universally seems kinda odd, given the rationale provided.

I was the winner of the first Quantopian contest, and it led to some interesting connections and opportunities outside of Quantopian. So for me, the pestering was a positive, not a negative.

@olive coyote That has nothing to do with regulations (depending, perhaps, on your job). And yes, if you have a successful, consistent algo, you absolutely want to hold yourself out to investors. There is no regulation that prevents the licensing of a trading algorithm.

Using an algorithm is different than receiving advice. There aren't any realistic regulatory concerns here.

All this talk about top performers in the contests, and the consequent assumption that those algorithms are the ones Quantopian and institutions are looking for, made me curious. I thought the algorithms selected by Quantopian for their hedge fund were not the top algorithms in the contests. I believe this because it is far easier (not easy) to write an algorithm that wins a contest than to write one that satisfies the requirements for a hedge fund. In the latter case you have to add so many constraints to your algorithm (low market exposure, hundreds of positions, sector neutrality, and also good performance since 2002) that all the returns are lost in favor of low volatility (which is why you need leverage to make those algorithms worthy of investment).

So, I'd be curious to know from Quantopian what position the selected algorithms had in the contest (if they came from a contest at all) when they were selected.

Luca -

I suspect that the probability of having a good 6-month run for a reasonably constructed algo is pretty high. It becomes a game of chance to land in the top three if there is a relatively large field of good competition.

Personally, I don't think in terms of gaming the contest; I just enter what might be o.k. for the fund and hope for the best.

Regarding "good performance since 2002," I don't think they are looking that far back (especially with the recent push to get authors to utilize non-OHLCV data sets, which tend not to go back to 2002).

Discussion of the over-fitting problem:

It would seem that the contest doesn't adequately reward algos that aren't over-fit. Although algos will roll over into the next contest, they are competing with ones that were submitted only six months prior, which could simply have anomalous out-of-sample performance (of course, there is the possibility of completely undiscovered sources of alpha, but even so, six months probably isn't enough time for such algos to stand out above the noise). The contest is a bit of a dopamine generator for the masses the way it is constructed, to engage users and to promote Quantopian. For example, several times I've submitted entries and seen my algo rank highly as the short-term performance jumps all over the place ("Eureka! I've hit the jackpot!"). I ain't no psychologist, but I have to think that this sort of thing could potentially be working against Quantopian versus for it, since it can reinforce behavior that would not be desirable in fund algo authors (although I guess it may work to hook folks in, who would sober up and become diligent Q research workhorses, versus dopamine-laden risk-taking traders).

So, a few suggestions:

• Somehow reward consistency with respect to the backtest, and out-of-sample longevity in the contest (the mechanics need to be fleshed out here to avoid a limited number of authors/algos getting prizes repeatedly month after month).
• Don't reveal rankings until the end of the contest (i.e. aid users in applying a disciplined "set it and forget it" mentality).
• For the top N contestants of a given contest, provide written algo analysis and recommendations, vis-a-vis the fund expectations, as part of their reward (it is a "teachable moment" that could be seized). Of course, this would only make sense if the contest is well-aligned with the fund.

Agreed, it seems the Q Open will disproportionately reward algos that take a risk that happens to pay off during the 6-month out-of-sample instead of rewarding more conservative, well-rounded algos that would perform consistently, though not as well, during all market conditions. The more risk, the more potential payoff. It's almost necessary to win.

I've also noticed that Q Open winning algos often place back in the hundreds during the backtest. Wouldn't it be smart if an additional criterion for winning the Q Open were that algos showed consistency between backtest and out-of-sample? Wouldn't that help filter out algos that place first due to luck or idiosyncratic risk?

Agree that consistency should be one of the ranking factors. It would screen out algorithms that had lucky results during the contest period, as well as algorithms with extreme backfitting. I think the research paper might also support this argument.

conservative, well-rounded algos that would perform consistently, though not as well, during all market conditions.

Well, one probably needs decades of data for that one, since the market has a way of doing funky things on a long time scale. I guess the argument is that only algos based on recent, novel data sets are particularly interesting; anything else will have run its course. Obviously, using as much data as possible is prudent, combined with a "strategic intent" story that describes the use of a potential new alpha source. This is potentially a serious limitation of the contest, at this point, since there are relatively few free data sets--the fund is looking for users to apply novel data sets, but to use them, users would have to pay. Perhaps Q could publish which contest algos are using which data sets? It would be interesting to see how many of the paid ones are being applied, and if they confer any advantage.

I know, this should not be said, I might have to pay for it, but here it is anyway.

In one swoop, Quantopian has not only anonymized, but also trivialized everyone participating in any of its contests.

Do you think any professional, except Quantopian's own, will consider the outcome of a simulated trading strategy when the author's codename is Red Shark, Blue Snake, or Rosy-Turbo Jackass?

Seriously, would you have any kind of confidence in Scarlet Jackal, or, for that matter, Green Dik Dik proposing you to put your money at risk? Yes, I can see it: here is my esteemed trading program developer, Pink-Slip Buzzard. Let me tell you that once he has his claws on your money, he will take care of it as if it was already his own.

Just for fun, here is a readymade elevator pitch: Hi, I'm Pink Sarcastic Fringehead, and this is my associate, Dark "Vader" Croc. We are very honest somethings and want to manage your 100M fund. Our strategies are among the best in Quantopian's simulated contest 35. They bear our respective names. As you will notice, our results are much better than Purple Lumpsucker's. Hope you will see the sincerity and integrity behind our proposal and let us manage your millions.

The option should have been: choose a pseudonym yourself, if you want one. That's it.

Otherwise, simply continue using your name.

We should have been provided with a choice.

I will have a hard time putting the word trust in the same sentence as one of those namesakes. I know I am going to discriminate.

If you are not proud of your work, or did shit, then please, do hide, I understand, no problem. Or ask Quantopian to remove your contest entry (which, as Quantopian, I would not do, but then again).

By anonymizing, Quantopian is also opening its doors wide open to all the bad stuff people can say on the net knowing that they have the cover of anonymity. I can not say bravo to that. Already anyone can make a fake account on a pseudonym and say whatever they want. It has been done before. It should be considered bad enough.

Let people stand behind what they say. Let them know we know who they are. And let them know they have said it in an open forum for all to see.

Quantopian is doing a disservice to itself and mostly to all its members. How seriously will anyone consider whatever is produced in those tests under those childish aliases? What kind of credibility will a Purple Orangutan or a Yellow Rat bring to a trading strategy? Even by association, Quantopian strategy developers might be ridiculed.

Also, this anonymity move makes it open ground for anyone wanting to troll for whatever purpose they may have, good or bad. All they need is to open a fake account, and under anonymity do whatever they want, contact whomever they want. And based on that, how can I trust anyone if I cannot determine their true identity? Just as a due diligence process, I would have to discard whatever they have to say. And since I would not know who has a fake id, I would have to apply a simple rule: trust no one.

If anyone wants to gain anyone's trust, they should simply start with being honest. And it starts with no deceptions, no camouflage, no aliases.

There is some serious rethinking needed here.

A professional site would have started with no pseudonyms at all. Only real people with real identities. No fake ids. Period.

I have not participated in any contest, so I really don't care.

After all, it is Quantopian's website, it is their business, and they can do whatever they want.

But... and I think it is a serious but...

Quantopian has said the anonymity thing was to answer a request by several people. I have not seen a number anywhere, but I do not think it is many out of its 100,000+ members.

So, here is one of the group of “several other people” that do not like the fake stuff.

I am already losing interest in the forums I listen to, losing interest in answering to animals, losing confidence in their posts due to these fake identities. Why should I waste my time on anonymous people? What good does it serve? What kind of relationship or discussions can be established based on fake names?

It took months and months to identify who was brilliant in their approach, who had a keen understanding of what they did and where I respected their opinions and level of knowledge.

And now, Quantopian just threw all that away too. It simply cut the historical link, the memory link of who said what and how to trace it. How long will it take to forget who was who going forward?

Did I forget to say I think anonymizing is a bad move?

That is me again expressing a different point of view. My name is: Guy Fleury, I am a real person, it is not a fake name, and I stand behind what I say.

It should be my choice to be identified by my proper name, not Quantopian's.

At the same time, Quantopian is also telling me something I did not want to hear. I think we all rely on the notion that our programs are our own, and that Quantopian will respect the fine line that represents seeing and not seeing our code.

It is a very flimsy line when those programs are simple text files that do not really reside on our machines, and where they have kept the ability to view this code, even if it is only on our request, they say. This alone undermines the trust needed in the integrity of our code, and could render all contests a total waste of time since they can read the code. We really do need confidence in the integrity of what Quantopian says or has to say.

For “several people”, it might not be a problem, but I think Quantopian has nonetheless just taken the first steps in crossing a very thin line.

This also makes the line murky on the program ownership thingy. Try to explain to a judge that your program is registered under a Rosy-Turbo Jackass. How would you justify your claim? It is hard enough to have your program on someone else's machine. A computer glitch and it is gone.

My advice: keep a copy on your machine of any program you think might have value. Anything you cloned was given away free and is public domain.

My thanks to all who do share their programs and notebooks. They are great learning tools, some are real hidden gems in the rough.

@Guy, I thought yours was a pseudonym play on Guy Fieri. :P

I appreciate your point of view, and I agree to the extent that I think people should be allowed to have their real names on the letterboard. It's something people are proud of, and pride is a big motivator.

But I disagree that people shouldn't be able to use pseudonyms if they so choose. Not everybody wants their personal information out in public, and it can be for any number of reasons, but especially because Q deals with money, and money is private.

For one, the leaderboard essentially screens and appraises the value of algorithms. Were your algorithm displaying phenomenal performance, a sophisticated criminal might want to break into your house and steal your computer in order to gain access to your lucrative secret sauce. Since you're so adamant that there's no legitimate reason to obfuscate your identity online, I'd like to point out that it didn't take me more than a couple of minutes to find your (publicly available) phone number and home address (52x-62xx / 27x rue de Bxxxxxxx). Furthermore, a hacker might attempt the same kind of theft remotely. I for one do not want to invite that kind of hassle into my life.

It goes the other way as well. In the process of discussing algorithms on the forum you might disclose how much money you are trading in your brokerage account and what kinds of returns you're getting, simply because alpha and algorithm capacity are important issues we all work with. Criminals aside, you might not want people from your real-world life being able to glean this kind of information about your finances online either. You might not want people in your life to know you have a winning algorithm for the same reason you'd be better off if nobody knew you won the lottery. Winning the lottery always ends tragically. It's human nature -- when people perceive that you are receiving tons of money "for free," they feel entitled to some of it. The dynamics of your relationships are more likely to change for the worse. So if I were to fall into some algorithmic money, I wouldn't want people Googling me to be able to find out about it.

I could go on and on, but I think it would be flogging a dead horse. There are plenty of legitimate reasons why you would want to be anonymous online. I was actually kind of pissed off when I posted on the forum for the first time and my real name was attached to the post. I thought to myself -- what the hell? I gave that information to Quantopian privately, not for them to share with the rest of the world.

The option should have been: choose a pseudonym yourself, if you want one. That's it.
Otherwise, simply continue using your name.
We should have been provided with a choice.

Technically, it seems pretty straightforward: add a tick box on https://www.quantopian.com/account. And in the rankings, simply put the pseudonym in quotes to indicate that the user chose not to use his Q user name. It is hard not to feel a bit cynical that the motivation behind the change had something to do with obscuring the identities of top performers (but maybe Dan's explanation above is the full story).

One issue now is that some folks are changing their user names to match their assigned contest pseudonyms. So, there is a loss of continuity of identities within the community. Perhaps this response wasn't anticipated, but it could have been avoided.

Last Friday, I entered Contest 32 with an adapted version of an algorithm, but I don't see it on the leaderboard (yet)?

@Veridian, if anybody wanted to find me, they certainly would not have to dig very far with a website and all. But have no fear, nobody has knocked on my door.

I understand your point of view, and accept your motivation for privacy. It is your choice. Hopefully it was a pseudonym you chose. I am also a very private person. But, if I put something in your face, I think you are entitled to know I said it and hold me accountable.

Anonymity removes the responsibility for what you might say, and some use that anonymity to say whatever they like. BTW, have you noticed the disappearance of “atiredmachine”? Changing IDs all the time is another way of staying anonymous. As said before, there is a loss of history, of continuity, when doing that. And I am interested in what you have to say.

Anyone on Quantopian wishing to be really anonymous could simply not say a word and use the tools provided however they want. They would not even be traceable. That is the case for the vast majority on the site (>90%).

But, if you have something to say, I for one, want to talk to real people, no camouflage.

All I am saying is: it should have been our choice, not Quantopian's or any small group of individuals. Just as it should not have been you or me deciding for everyone.

I am not interested in answering a question coming from some Purple Lumpsucker. I would find it a total waste of time. Even if, in this case, I might just have made an exception.

With the 1x cap, does that mean we no longer are required to hedge (short)?
The reason I ask is that the official rules say (here):

"Your algorithm must be hedged to the market. It should hold both long
and short positions simultaneously, or be entirely in cash."

So if capped at 1x, that automatically means we're entirely in cash, so the hedging is not required?
Thanks.

No, it means cash as an alternative to stock positions, not cash trades as opposed to margin trades.
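To make the 1x cap concrete, here's a minimal plain-Python sketch of gross leverage as it's usually defined (sum of absolute position values over capital). The tickers and dollar amounts are made up, and this is not Quantopian's implementation:

```python
def gross_leverage(positions, capital):
    """Gross leverage = sum of absolute position values / capital.

    A long/short book consumes leverage on both sides, so the 1x cap
    limits longs plus |shorts| combined -- it does not force you into cash.
    """
    return sum(abs(value) for value in positions.values()) / capital

# Hypothetical $100k account: $50k long, $50k short.
# Market exposure nets to zero, yet gross leverage is exactly 1.0x.
book = {"AAPL": 50_000, "XYZ": -50_000}
print(gross_leverage(book, 100_000))  # 1.0
```

In other words, a fully invested dollar-neutral book fits under the cap; "entirely in cash" is just the alternative state the rules allow.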

Hi Dan -

I was just skimming over https://www.quantopian.com/open and noticed:

Your algorithm must use the default slippage and commission models.

Is the default commission now set_commission(commission.PerShare(cost=0.001, min_trade_cost=0)) or is there a separate default commission that is applied when an algo is entered into the contest? I'd added set_commission(commission.PerShare(cost=0.001, min_trade_cost=0)) explicitly to my code, thinking that the default commission had not changed, since the help page states:

The default commission model for US equities is PerShare, at $0.0075 per share and a $1 minimum cost per order. The first fill will incur at least the minimum commission, and subsequent fills will incur additional commission.

If set_commission(commission.PerShare(cost=0.001, min_trade_cost=0)) is more appropriate for your prime broker, wouldn't it make sense to make it the default?
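For what it's worth, the arithmetic of the two models can be sketched in plain Python (this mirrors the PerShare description quoted above; it is not Quantopian's actual implementation, and the 100-share order is just an example):

```python
def per_share_commission(shares, cost_per_share, min_trade_cost=0.0):
    """Commission for one order under a per-share model with an
    optional per-order minimum (applied once across all fills)."""
    return max(shares * cost_per_share, min_trade_cost)

# A 100-share order:
old = per_share_commission(100, 0.0075, min_trade_cost=1.0)  # max($0.75, $1.00) = $1.00
new = per_share_commission(100, 0.001)                       # $0.10, no minimum
```

The per-order minimum dominates small orders under the old default, which is exactly why high-turnover, small-order strategies become viable at $0.001/share with no minimum.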

We agree! We are in the process of changing the default behavior of the commissions model.


@Josh In the meantime, is it OK to leave set_commission(commission.PerShare(cost=0.001, min_trade_cost=0)) in our contest submissions? It's nicest to have exactly the same code submitted to the competition as is used in backtesting.

It appears I am a little late to this discussion, but I agree with some of the above comments about the option of keeping your real name on the leaderboard. IMHO, being able to compete openly is an important incentive for potential competitors. At the very least, Q's best competitors frequently discuss ideas among themselves, and with a disguised ranking, it's not clear who has the credentials. Even if I change my name to Taylor "Bright Blue Ostrich" Smith, who's to say I'm the real "Bright Blue Ostrich"?

I do recognize that Quantopian is not in the business of finding its community jobs. They are in the business of discovering talented programmers, funding their algorithms, and earning a return on their portfolio of algorithms. The Quantopian Open, as I see it, is mainly for discovering talent, and talented individuals will be more likely to want to compete in the contest if they can be discovered.

On a separate note, I agree with the other changes. In reality, leverage comes at the cost of borrowing capital, and not all algorithms will utilize leverage equally. Furthermore, I've personally noticed instances where an algorithm of mine could have performed better in the competition were the commission model more realistic. Another welcome change!

Just double-checking: are the settings in set_commission(commission.PerShare(cost=0.001, min_trade_cost=0)) now the defaults?