I Am Interested In Building My Own Algorithm Taking A Concentrated Value Approach. Anyone Been Interested In This?

I am interested in building an algorithm focusing on a concentrated value approach...

This would mean 3-8 stocks with:
- little or no debt
- high return on invested capital
- a long-term competitive advantage

Has anyone else been interested in this side of things?
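As a rough sketch of that screen in plain pandas (the column names and numbers are hypothetical, not any particular data vendor's fields):

```python
import pandas as pd

# Hypothetical fundamentals table; tickers and values are illustrative only.
df = pd.DataFrame({
    "debt_to_equity": [0.05, 1.20, 0.00, 0.30, 2.50],
    "roic":           [0.22, 0.04, 0.18, 0.25, 0.07],
}, index=["AAA", "BBB", "CCC", "DDD", "EEE"])

# Screen: little or no debt, high return on invested capital.
screen = (df["debt_to_equity"] <= 0.5) & (df["roic"] >= 0.15)
candidates = df[screen].sort_values("roic", ascending=False)
print(candidates.index.tolist())  # → ['DDD', 'AAA', 'CCC']
```

The competitive-advantage criterion is the hard one to express as a column filter, which is where most of the discussion below ends up.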

4 responses

Value investing is always interesting, but it doesn't really work in that form anymore (at least not in a quant strategy). It's also hard to quantify a "long-term competitive advantage" from a company's fundamentals, as it's somewhat subjective.
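One rough proxy sometimes used for a moat is the stability of profit margins over time: a durable advantage tends to show up as consistently high margins. A sketch (company names and numbers are made up):

```python
import statistics

# Hypothetical yearly gross margins for two companies (illustrative numbers).
margins = {
    "steady_co":  [0.42, 0.43, 0.41, 0.44, 0.42],
    "erratic_co": [0.35, 0.10, 0.50, 0.05, 0.45],
}

def moat_score(series):
    # Rough proxy: high average margin with low year-to-year variation.
    mean = statistics.mean(series)
    stdev = statistics.stdev(series)
    return mean / (stdev + 1e-9)  # higher = more stable profitability

scores = {name: moat_score(m) for name, m in margins.items()}
print(max(scores, key=scores.get))  # → steady_co
```

This is only a heuristic; it can't distinguish a genuine moat from a business that simply hasn't been challenged yet.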

from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.data import builtin 
from quantopian.pipeline.filters import QTradableStocksUS
import quantopian.pipeline.factors as Factors
from quantopian.pipeline.data.builtin import USEquityPricing
import pandas as pd
def initialize(context):
    context.leverage = 1.0
    schedule_function(rebalance, date_rules.month_start(), time_rules.market_close(hours=1))
    schedule_function(record_vars, date_rules.month_start(), time_rules.market_close())
    attach_pipeline(make_pipeline(), 'my_pipeline')
def make_pipeline():
    base_universe = QTradableStocksUS()
    USEP = builtin.USEquityPricing
    base = USEP.volume.latest
    Base_Lower_Bound = 70
    Base_Upper_Bound = 100
    Filter_1 = morningstar.balance_sheet.current_debt.latest
    Filter_1_Upper_Bound = 30
    Filter_1_Lower_Bound = 0
    Filter_1 = Filter_1.percentile_between(Filter_1_Lower_Bound, Filter_1_Upper_Bound, mask = base_universe)
    Filter_3 = morningstar.operation_ratios.roic.latest
    Filter_3_Upper_Bound = 100
    Filter_3_Lower_Bound = 70
    Filter_3 = Filter_3.percentile_between(Filter_3_Lower_Bound, Filter_3_Upper_Bound, mask = base_universe)
    long_mask = Filter_1 & Filter_3
    longs = base.percentile_between(Base_Lower_Bound, Base_Upper_Bound, mask=long_mask)
    return Pipeline(
        columns = {
            'LONGS': longs,
        },
        screen = base_universe
    )
def before_trading_start(context, data):
    context.output = pipeline_output('my_pipeline')
    context.longs = context.output[context.output['LONGS']].index
    context.long_weight = assign_weights_longs(context)
def assign_weights_longs(context):
    # Equal weight across the current longs; returns None when there are none.
    if len(context.longs) > 0:
        return context.leverage / len(context.longs)
    return None

def record_vars(context, data):
    longs = 0
    for position in context.portfolio.positions.itervalues():
        if position.amount > 0:
            longs += 1

    record(leverage=context.account.leverage, long_count=longs)
def rebalance(context,data):
    for security in context.portfolio.positions:
        if security not in context.longs and data.can_trade(security):
            order_target_percent(security, 0)
    for security in context.longs:
        if context.long_weight > 0:
            if data.can_trade(security):
                order_target_percent(security, context.long_weight)

Mainly thought you might like some logging for a preview/snapshot of the fundamentals' values, to see what's there.

2006-01-04 05:45 log_pipe:104 INFO len 48  
2006-01-04 05:45 log_pipe:105 INFO  
            min         mean          max  
LONGS      1.00         1.00         1.00      <= True turns up as 1 here. Now always True, so I might have broken something  
cdebt      0.00    644253.59   3000000.00  
 roic      0.04         0.08         0.62  
2006-01-04 05:45 log_pipe:118 INFO _ _ _   LONGS   _ _ _  
    ... LONGS highs  
                     LONGS     cdebt      roic  
Equity(448 [APA])     True  274000.0  0.060162  
Equity(779 [BCR])     True  100000.0  0.055362  
Equity(14596 [ELNK])  True   16000.0  0.073824  
Equity(15101 [CHKP])  True  200000.0  0.046861  
    ... LONGS lows  
                       LONGS      cdebt      roic  
Equity(7612 [ANDV])     True  3000000.0  0.085788  
Equity(7904 [VAR])      True  2689000.0  0.087473  
Equity(8612 [CHS])      True   332000.0  0.074730  
Equity(26578 [GOOG_L])  True    10000.0  0.059850  
2006-01-04 05:45 log_pipe:118 INFO _ _ _   cdebt   _ _ _  
    ... cdebt highs  
                    LONGS      cdebt      roic  
Equity(7612 [ANDV])  True  3000000.0  0.085788  
Equity(7904 [VAR])   True  2689000.0  0.087473  
Equity(1072 [BR])    True  2000000.0  0.068464  
Equity(16389 [NCR])  True  2000000.0  0.091225  
    ... cdebt lows  
                     LONGS  cdebt      roic  
Equity(7364 [TDW])    True    0.0  0.045372  
Equity(6127 [PPP])    True    0.0  0.154208  
Equity(24617 [KOMG])  True    0.0  0.072857  
Equity(1593 [CLE])    True    0.0  0.046998  
2006-01-04 05:45 log_pipe:118 INFO _ _ _   roic   _ _ _  
    ... roic highs  
                     LONGS     cdebt      roic  
Equity(20680 [AKAM])  True  420000.0  0.623556  
Equity(6127 [PPP])    True       0.0  0.154208  
Equity(21697 [NTRI])  True  169000.0  0.133684  
Equity(16140 [VPHM])  True    8334.0  0.129058  
    ... roic lows  
                     LONGS      cdebt      roic  
Equity(23483 [ANT])   True  1650000.0  0.044302  
Equity(1747 [COGN])   True    32000.0  0.043266  
Equity(6008 [PKD])    True    24000.0  0.041066  
Equity(23709 [NFLX])  True    68000.0  0.040553  
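For reference, the min/mean/max summary and the highs/lows sections that `log_pipe` prints can be reproduced with plain pandas (the values below are made up for illustration, loosely modeled on the log above):

```python
import pandas as pd

# Illustrative pipeline-style output (tickers and values are made up).
df = pd.DataFrame({
    "cdebt": [274000.0, 0.0, 3000000.0, 16000.0],
    "roic":  [0.060162, 0.154208, 0.085788, 0.073824],
}, index=["APA", "PPP", "ANDV", "ELNK"])

# min/mean/max per column, like the log's header block.
summary = df.agg(["min", "mean", "max"])
print(summary)

# Top and bottom rows per column, like the "... highs / ... lows" sections.
print(df.nlargest(2, "roic"))   # PPP, ANDV
print(df.nsmallest(2, "roic"))  # APA, ELNK
```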
from quantopian.algorithm import attach_pipeline, pipeline_output
from quantopian.pipeline  import Pipeline
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.data import builtin
from quantopian.pipeline.filters import QTradableStocksUS
import quantopian.pipeline.factors as Factors
from quantopian.pipeline.data.builtin import USEquityPricing
import pandas as pd

def initialize(context):
    context.leverage = 1.0
    schedule_function(rebalance, date_rules.month_start(), time_rules.market_close(hours=1))
    schedule_function(record_vars, date_rules.month_start(), time_rules.market_close())
    attach_pipeline(make_pipeline(), 'pipeline')
    
def make_pipeline():
    m    = QTradableStocksUS()   # m for mask
    
    cd_Lower_Bound   =  0 ; cd_Upper_Bound   =  30
    roic_Lower_Bound = 70 ; roic_Upper_Bound = 100
    Base_Lower_Bound = 70 ; Base_Upper_Bound = 100
    
    base = USEquityPricing.   volume.latest
    cd   = Fundamentals.current_debt.latest
    roic = Fundamentals.        roic.latest
    
    # &= means add this to the existing mask
    m    &= cd  .percentile_between(  cd_Lower_Bound,   cd_Upper_Bound, mask=m)
    m    &= roic.percentile_between(roic_Lower_Bound, roic_Upper_Bound, mask=m)
    longs = base.percentile_between(Base_Lower_Bound, Base_Upper_Bound, mask=m)
    
    return Pipeline( screen = m & longs,
        columns = {
            'LONGS': longs,
            'cdebt': cd,
            'roic' : roic,
        },
    )

def before_trading_start(context, data):
    context.output = pipeline_output('pipeline')
    context.longs  = context.output[context.output['LONGS']].index
    context.long_weight = assign_weights_longs(context)

    #if 1 or 'log_pipe_done' not in context:       # show pipe info every day
    if 'log_pipe_done' not in context:       # show pipe info once
        df = context.output ; num = 4
        log_pipe(context, data, num, df) #, details=['bbr', 'alpha'])

def assign_weights_longs(context):
    # Equal weight across the current longs; returns None when there are none.
    if len(context.longs) > 0:
        return context.leverage / len(context.longs)
    return None

def record_vars(context, data):
    longs = 0
    for position in context.portfolio.positions.itervalues():
        if position.amount > 0:
            longs += 1

    record(leverage=context.account.leverage, long_count=longs)
    
def rebalance(context,data):
    for security in context.portfolio.positions:
        if security not in context.longs and data.can_trade(security):
            order_target_percent(security, 0)
            
    for security in context.longs:
        if context.long_weight > 0:
            if data.can_trade(security):
                order_target_percent(security, context.long_weight)

def log_pipe(context, data, num, df, details=None):
    c = context
    c.log_pipe_done = 1 ; log_nan_only = 0 ; show_sectors = 0
    if not len(df):
        log.info('len {}'.format(len(df)))
        return

    if isinstance(df, pd.Series):
        log.info('       min      mean       max   Series {}'.format(df.name))
        nan_count = len(df[df != df])
        nan_count = 'NaNs {}/{}'.format(nan_count, len(df)) if nan_count else ''
        log.info('    {}    {}    {}   {}'.format(
           ('%.2f' % df.min()) .rjust(6),
           ('%.2f' % df.mean()).rjust(6),
           ('%.2f' % df.max()) .rjust(6),
           nan_count
        ))
        return

    nans = 0
    content_min_max = ('\n            min      mean       max\n')
    for col in df.columns:
        if col == 'sector' and not show_sectors: continue
        nan_count = len(df[col][df[col] != df[col]])
        nan_count = 'NaNs {}/{}'.format(nan_count, len(df)) if nan_count else ''
        content_min_max += ('{}    {}    {}    {}   {}\n'.format(col.rjust(5),
           ('%.2f' % df[col] .min()).rjust(6),
           ('%.2f' % df[col].mean()).rjust(6),
           ('%.2f' % df[col] .max()).rjust(6),
           nan_count
        ))
        if nan_count: nans = 1
    if log_nan_only and nans or not log_nan_only:
        log.info('len {}'.format(len(df)))
        log.info(content_min_max)

    if details is None: details = df.columns
    for detail in details:
        if detail == 'sector': continue
        hi = df.sort_values(by=detail, ascending=False).head(num)
        lo = df.sort_values(by=detail, ascending=False).tail(num)
        content  = ''
        content += ('_ _ _   {}   _ _ _'  .format(detail))
        content += ('\n\t... {} highs\n{}'.format(detail, str(hi)))
        content += ('\n\t... {} lows \n{}'.format(detail, str(lo)))
        if log_nan_only and not len(lo[lo[detail] != lo[detail]]):
            continue  # if no nans
        log.info(content) ; nans = 1
    if (log_nan_only and nans) or not log_nan_only:
        log.info('len {}'.format(len(df)))
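Outside Quantopian, the chained `percentile_between(..., mask=m)` pattern used above can be approximated with pandas quantiles. This is a sketch of the idea, not the platform's implementation, with made-up tickers:

```python
import pandas as pd

# Illustrative factor values for ten hypothetical tickers.
s = pd.Series(range(10), index=[f"T{i}" for i in range(10)], dtype=float)

def percentile_between(series, lo, hi, mask=None):
    """Boolean mask: values within the [lo, hi] percentile band, with the
    percentiles computed only over rows where `mask` is True (mimicking
    the effect of Quantopian's mask= argument)."""
    sub = series[mask] if mask is not None else series
    lo_v, hi_v = sub.quantile(lo / 100), sub.quantile(hi / 100)
    out = series.between(lo_v, hi_v)
    if mask is not None:
        out &= mask
    return out

# &= narrows the existing mask, just like in the pipeline above.
m = pd.Series(True, index=s.index)
m &= percentile_between(s, 0, 30, mask=m)    # keep the bottom 30% first
m &= percentile_between(s, 50, 100, mask=m)  # then the top half of what's left
print(s[m].index.tolist())  # → ['T1', 'T2']
```

The order of the chained filters matters: each percentile band is computed only over the survivors of the previous one.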

@Scott In my limited but successful personal investing experience over the past 4 years, value investing in 10 to 30 stocks is a discretionary trading method which requires examination of 10-K/Q filings and related news. Quantopian fundamentals data is an excellent initial filter for rank-ordering value as a first screen of the top several hundred stocks. What I personally like about this approach: 1) I know why I enter each of my positions, and can choose a technical (price-movement) entry; 2) except for malfeasance or a surprise, I expect each position to recover from a price dip and maintain value within its historical price range; and 3) I have sufficient market diversification. I am always looking for new opportunities, and when one arises, I fund it by selling a position that has either had an extraordinary short-term increase or gone an extraordinarily long time without any price increase. For this approach, you would use a research notebook for the initial screen. Algorithms are for non-discretionary strategies.
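The rank-ordering screen described above could be sketched in a research notebook like this (column names and numbers are hypothetical, not Quantopian's actual field names):

```python
import pandas as pd

# Hypothetical fundamentals for a notebook-style initial screen.
df = pd.DataFrame({
    "roic":         [0.22, 0.04, 0.18, 0.25],
    "current_debt": [1e5, 9e6, 5e5, 0.0],
}, index=["AAA", "BBB", "CCC", "DDD"])

# Rank each factor (1 = best), then average into a composite value rank.
ranks = pd.DataFrame({
    "roic_rank": df["roic"].rank(ascending=False),        # higher ROIC is better
    "debt_rank": df["current_debt"].rank(ascending=True), # less debt is better
})
ranks["composite"] = ranks.mean(axis=1)

# Shortlist for discretionary review of filings and news.
shortlist = ranks.sort_values("composite").head(3)
print(shortlist.index.tolist())  # → ['DDD', 'AAA', 'CCC']
```

The shortlist is only the starting point; the discretionary work on filings and news happens afterward, as the post describes.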

I agree algorithms would be great for initial screening...

I have no problem with the qualitative factors, but I do find the quantitative part much harder...

Maybe a value algorithm would be a winner...