I am using Anaconda and my pandas version is 0.23.1. When dealing with a single large file, setting chunksize or iterator=True works fine and memory usage stays low. The problem arises when I try to deal with 5000+ files (file names are in filelist):
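Roughly, the pattern looks like this (a minimal sketch, assuming CSV input and a chunk size of 10000; the exact read options in my script may differ):

```python
import pandas as pd

filelist = ["traj_0.csv", "traj_1.csv"]  # in reality 5000+ files

# One chunked reader per file -- memory climbs past 20 GB here,
# even before any chunks are actually consumed.
trajectory = [pd.read_csv(f, iterator=True, chunksize=10000) for f in filelist]

# Chunks are only pulled later, e.g.:
for reader in trajectory:
    chunk = reader.get_chunk(10000)
    # ... process chunk ...
```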
Memory usage rises very quickly and soon exceeds 20 GB. However, trajectory = [open(f, 'r')....] and reading 10000 lines from each file works fine.
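For reference, the plain open() variant I mean is roughly the following (a sketch; the exact loop is trimmed above, and islice is just one way to read a fixed number of lines):

```python
from itertools import islice

# Plain Python file handles instead of pandas readers -- this keeps memory low.
trajectory = [open(f, "r") for f in filelist]
for fh in trajectory:
    lines = list(islice(fh, 10000))  # read 10000 lines from each file
    # ... process lines ...
```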
I also tried the low_memory=True option, but it does not help. Both the engine='python' and memory_map=<some file> options solve the memory problem, but when I use the data with