Author: 书友48919914 | Source: Internet | 2023-09-07 17:57
When I load the file with json, Python's memory usage spikes to about 1.8GB and I can't seem to get that memory to be released. I put together a very simple test case:
import json

with open("test_file.json", 'r') as f:
    j = json.load(f)
I'm sorry that I can't provide a sample JSON file; my test file has a lot of sensitive information, but for context, I'm dealing with a file on the order of 240MB. After running the above two lines I have the previously mentioned 1.8GB of memory in use. If I then do del j,
memory usage doesn't drop at all. If I follow that with gc.collect(),
it still doesn't drop. I even tried unloading the json module and running another gc.collect().
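For what it's worth, one way to watch resident memory from inside the process on Linux is to read /proc/self/status directly. This is a sketch, not part of the original question: the synthetic data below is a stand-in for the real (sensitive) 240MB file, and the key point it illustrates is that del plus gc.collect() returns objects to CPython's allocator, which does not necessarily hand the pages back to the OS:

```python
import gc
import json

def rss_kb():
    # Current resident set size in KB, read from the Linux proc filesystem.
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

# Synthetic stand-in for the real JSON file (which can't be shared).
blob = json.dumps([{"id": i, "payload": "x" * 100} for i in range(50000)])
j = json.loads(blob)
print("after load: %d KB resident" % rss_kb())

del j
freed = gc.collect()  # returns the number of unreachable objects collected
print("after del + gc.collect (%d collected): %d KB resident" % (freed, rss_kb()))
```

Even when the second RSS reading barely moves, the Python-level objects really are gone; the gap is between CPython's internal allocator and the OS.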
I'm trying to run some memory profiling, but heapy has been churning at 100% CPU for about an hour now and has yet to produce any output.
Does anyone have any ideas? I've also tried the above using cjson rather than the bundled json module. cjson used about 30% less memory but otherwise displayed exactly the same issues.
I'm running Python 2.7.2 on Ubuntu server 11.10.
I'm happy to load up any memory profiler and see if it does better than heapy, and to provide any diagnostics you might think are necessary. I'm hunting around for a large test JSON file that I can share so anyone else can give it a go.
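As a lightweight alternative while heapy is stuck, a rough diagnostic (a sketch, not a full profiler) is to count live objects tracked by the garbage collector, grouped by type; after loading a big JSON document the dict, list, and str counts dominate, and after del plus gc.collect() they should drop even if RSS does not:

```python
import gc
from collections import Counter

def top_object_types(n=10):
    # Cheap heap snapshot: count GC-tracked live objects by type name.
    # Much coarser than heapy, but it finishes instantly.
    counts = Counter(type(o).__name__ for o in gc.get_objects())
    return counts.most_common(n)
```

Comparing two snapshots (before and after del) shows whether the Python objects themselves were actually freed, separating that question from whether the OS got the memory back.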
1 Solution