Quick and dirty way to find tops and bottoms of time series

I couldn't find any code to do this, so I wrote a crude way of algorithmically finding "turning" points in time series data.

Summary of the code:

  1. Smooth the time series using curve fitting
  2. Find the first derivative using the forward-difference method
  3. Find bottoms and tops by scanning the first derivative for sign changes: a cross from - to + marks a bottom, and a cross from + to - marks a top
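
The three steps above can be sketched roughly like this (this is a sketch of the described approach, not the attached notebook's code; the polynomial degree is an arbitrary choice):

```python
import numpy as np

def turning_points(prices, poly_degree=5):
    """Smooth the series with a polynomial fit, take the forward
    difference, then flag sign changes in the first derivative as
    bottoms (- to +) and tops (+ to -)."""
    x = np.arange(len(prices))
    coeffs = np.polyfit(x, prices, poly_degree)  # step 1: curve fitting
    smooth = np.polyval(coeffs, x)
    d1 = np.diff(smooth)                         # step 2: forward difference
    bottoms, tops = [], []
    for i in range(1, len(d1)):                  # step 3: sign crossings
        if d1[i - 1] < 0 and d1[i] > 0:
            bottoms.append(i)                    # derivative crosses - to +
        elif d1[i - 1] > 0 and d1[i] < 0:
            tops.append(i)                       # derivative crosses + to -
    return bottoms, tops
```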

This could be extended to automatically detect support and resistance levels in a trading strategy.
Check out the attached notebook!

Loading notebook preview...
Notebook previews are currently unavailable.
6 responses

Another, even simpler way of finding turning points is to use "fractals" (a highly misleading name). I've used these in the past to find support/resistance levels in the FX market, and they work quite well.

http://www.investopedia.com/articles/trading/06/fractals.asp
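
For reference, the 5-bar fractal rule from the linked article can be sketched as follows (not anyone's posted code, just the textbook definition: an up fractal is a high with two lower highs on each side, and a down fractal is a low with two higher lows on each side):

```python
def fractals(highs, lows):
    """Return indices of up and down fractals in parallel high/low series."""
    up, down = [], []
    # The two outermost bars on each side can never be fractal centers.
    for i in range(2, len(highs) - 2):
        # Up fractal: strictly higher high than the two bars on each side.
        if highs[i] > max(highs[i-2], highs[i-1], highs[i+1], highs[i+2]):
            up.append(i)
        # Down fractal: strictly lower low than the two bars on each side.
        if lows[i] < min(lows[i-2], lows[i-1], lows[i+1], lows[i+2]):
            down.append(i)
    return up, down
```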

Thanks, Mikko. Much easier and more reliable.

Any suggestions on how you go about finding support/resistance? I was thinking of looking at a histogram of the turning points and extracting levels where many points cluster, then invalidating a support/resistance level once it has been broken.
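
The histogram idea could be sketched along these lines (the bin count and minimum count are arbitrary choices, not anything settled in this thread):

```python
import numpy as np

def levels_from_pivots(pivot_prices, n_bins=20, min_count=3):
    """Bin the prices of detected turning points and keep the centers
    of bins that collect at least `min_count` pivots as candidate
    support/resistance levels."""
    counts, edges = np.histogram(pivot_prices, bins=n_bins)
    centers = (edges[:-1] + edges[1:]) / 2  # midpoint of each bin
    return [c for c, n in zip(centers, counts) if n >= min_count]
```

Invalidating broken levels would then be a second pass: drop any level that price has since closed through.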

Loading notebook preview...
Notebook previews are currently unavailable.

Hey guys, this is very cool, but the notebook doesn't work. I'm getting this error:

```
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-36-a8992727d9f0> in <module>()
----> 1 bottoms, tops = plotTurningPoints(asset, 1)

<ipython-input-35-a12f49a2439b> in plotTurningPoints(price_series, n)
     14     ax.plot(price_series)
     15     ticks = ax.get_xticks()
---> 16     ax.set_xticklabels([price_series.index[i].date() for i in ticks[:-1]]) # Label x-axis with dates
     17     ax.set_title('Share Price with Turning Points')
     18     ax.plot([b[0] for b in bottoms], price_series[[b[0] for b in bottoms]], 'gD')

/usr/local/lib/python2.7/dist-packages/pandas/tseries/base.pyc in __getitem__(self, key)
    190         getitem = self._data.__getitem__
    191         if lib.isscalar(key):
--> 192             val = getitem(key)
    193             return self._box_func(val)
    194         else:

IndexError: index 735842 is out of bounds for axis 0 with size 160
```
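
The error happens because `ax.get_xticks()` returns tick *locations*, and when the series is plotted against its DatetimeIndex those are matplotlib date ordinals (hence 735842), not row positions, so they can't be used to index `price_series.index`. One possible fix, assuming the series is instead plotted by integer position (`plot_with_date_labels` is a hypothetical name, not from the notebook):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for illustration
import matplotlib.pyplot as plt

def plot_with_date_labels(price_series):
    """Plot values by integer position so tick locations are row
    offsets, then map only in-bounds ticks to dates for the labels."""
    fig, ax = plt.subplots()
    ax.plot(range(len(price_series)), price_series.values)
    # Keep only ticks that are valid row positions before indexing.
    ticks = [int(t) for t in ax.get_xticks() if 0 <= t < len(price_series)]
    ax.set_xticks(ticks)
    ax.set_xticklabels([price_series.index[i].date() for i in ticks])
    ax.set_title('Share Price with Turning Points')
    return fig, ax
```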

Found this - haven't tested it, though. It's from 2015, so it might need reformatting:
https://www.quantopian.com/posts/how-to-code-resistance-areas

@Randy Walker I love your code. I'm thinking of filtering it into areas of major and minor support and resistance. The way I would go about this is to compare the standard deviations of the absolute distances between pivots. What do you think? I feel like the standard deviation of the distance is the way to go, but I could be wrong.

Here is the process I'm thinking about:
MODIFICATION PROCESS:

  1. Incorporate the pivot-area method so it returns a list of raw pivots
    in order, each classified as a top or a bottom.

  2. Assign a distance variable to each pivot area by measuring the
    distance to the previous pivot and the next pivot, if any. Average
    them out? Return a list with the corresponding data.

  3. Using the Python statistics library, compile the data and assign
    a standard deviation for each pivot. The sample size is the desired
    time period.

  4. Separate the pivot list into tops and bottoms using the given methods.

  5. For the tops and bottoms lists, designate each pivot major or minor
    based on the size of the standard deviation.

  6. Plot the levels as horizontal lines. Done!
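
A simplified sketch of that process, using a single standard deviation across all pivots and a cutoff of mean plus half a standard deviation (both of those choices are assumptions, since the post leaves them open):

```python
import statistics

def classify_pivots(pivots):
    """pivots: list of (index, price, 'top' or 'bottom'), in order.
    Measure each pivot's average absolute price distance to its
    neighbours, then label pivots whose distance exceeds
    mean + 0.5 * stdev as 'major', the rest as 'minor'."""
    dists = []
    for i, (idx, price, kind) in enumerate(pivots):
        neighbours = []
        if i > 0:
            neighbours.append(abs(price - pivots[i - 1][1]))
        if i < len(pivots) - 1:
            neighbours.append(abs(price - pivots[i + 1][1]))
        dists.append(sum(neighbours) / len(neighbours))  # average out
    cutoff = statistics.mean(dists) + 0.5 * statistics.stdev(dists)
    return [(idx, price, kind, 'major' if d > cutoff else 'minor')
            for (idx, price, kind), d in zip(pivots, dists)]
```

The tops and bottoms can then be split out and drawn as horizontal lines, with line weight chosen by the major/minor label.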