from [SO](http://stackoverflow.com/questions/30587026/when-reading-huge-hdf5-file-with-pandas-read-hdf-why-do-i-still-get-memory)

```
In [2]: df = DataFrame(np.random.randn(1000,2),columns=list('AB'))

In [3]: df.to_hdf('test.h5','df',mode='w',format='table',data_columns=True)

In [4]: i = pd.read_hdf('test.h5','df',chunksize=10)

In [5]: i
Out[5]: <pandas.io.pytables.TableIterator at 0x108f397d0>

In [6]: i.coordinates
Out[6]:
Int64Index([  0,   1,   2,   3,   4,   5,   6,   7,   8,   9,
            ...
            990, 991, 992, 993, 994, 995, 996, 997, 998, 999],
           dtype='int64', length=1000)
```

The coordinates here are just the consecutive integers 0..999, so they could be represented by `slice(0, 1000, 1)` in this case.
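A minimal sketch of why the slice representation matters (this is not pandas' actual internals, just an illustration of the memory argument): materializing the coordinates of `n` consecutive rows as an `Int64Index` costs 8 bytes per row, while an equivalent `slice` object is constant-size regardless of `n`, yet selects exactly the same row positions.

```python
import sys

import numpy as np
import pandas as pd

n = 1_000_000
coords_index = pd.Index(np.arange(n))  # materialized coordinates: 8 bytes/row
coords_slice = slice(0, n, 1)          # equivalent compact representation

print(coords_index.nbytes)          # 8_000_000 bytes for one million rows
print(sys.getsizeof(coords_slice))  # a few dozen bytes, independent of n

# Both describe the same row positions:
assert (np.arange(n)[coords_slice] == coords_index.values).all()
```

For the 1000-row example above, the savings are small, but for a huge HDF5 file the full coordinate index can itself be a significant allocation even when reading in chunks.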