Published 2024-12-08 09:44
I just noticed that NumPy's zeros function behaves strangely:
%timeit np.zeros((1000, 1000))
1.06 ms ± 29.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit np.zeros((5000, 5000))
4 µs ± 66 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
On the other hand, ones seems to behave normally.
Does anybody know why initializing a small NumPy array with the zeros function takes more time than initializing a large one?
(Python 3.5, numpy 1.11)
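A minimal way to reproduce the comparison with the standard-library timeit module (the shapes match the question; the absolute numbers and even the direction of the gap depend on the platform and allocator):

```python
import timeit

import numpy as np

# Per-call time for np.zeros at two sizes. On many platforms the larger
# allocation is *faster* per call: above a threshold, calloc asks the OS
# for pages that are already zeroed, so nothing has to be written at
# allocation time.
small = timeit.timeit(lambda: np.zeros((1000, 1000)), number=100) / 100
large = timeit.timeit(lambda: np.zeros((5000, 5000)), number=100) / 100
print(f"zeros 1000x1000: {small * 1e6:10.1f} µs per call")
print(f"zeros 5000x5000: {large * 1e6:10.1f} µs per call")

# np.ones must write 1.0 into every element, so its cost always grows
# with the array size.
ones_large = timeit.timeit(lambda: np.ones((5000, 5000)), number=10) / 10
print(f"ones  5000x5000: {ones_large * 1e6:10.1f} µs per call")
```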
This looks like calloc hitting a threshold beyond which it makes an OS request for zeroed memory and doesn't need to initialize it manually. Looking through the source code, numpy.zeros eventually delegates to calloc to acquire a zeroed memory block; if you compare against numpy.empty, which performs no initialization:
In [15]: %timeit np.zeros((5000, 5000))
The slowest run took 12.65 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 10 µs per loop
In [16]: %timeit np.empty((5000, 5000))
The slowest run took 5.05 times longer than the fastest. This could mean that an
intermediate result is being cached.
100000 loops, best of 3: 10.3 µs per loop
you can see that np.zeros has no initialization overhead for the 5000x5000 array.
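The threshold behavior can be probed against calloc directly, bypassing NumPy. This sketch assumes a POSIX system where the C library is reachable via ctypes.CDLL(None) (on Windows you would need the C runtime DLL instead), and the 8 MB / 200 MB sizes mirror the two array shapes above:

```python
import ctypes
import timeit

# Load the process's own C library and declare the calloc/free signatures.
libc = ctypes.CDLL(None)
libc.calloc.restype = ctypes.c_void_p
libc.calloc.argtypes = [ctypes.c_size_t, ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

def calloc_free(nbytes):
    # Allocate a zeroed block and free it again, which is roughly what
    # np.zeros does under the hood (minus the ndarray wrapper).
    p = libc.calloc(1, nbytes)
    assert p is not None  # calloc returned NULL
    libc.free(p)

# 8 * 1000 * 1000 bytes ~ a 1000x1000 float64 array; 8 * 5000 * 5000 ~ 5000x5000.
small = timeit.timeit(lambda: calloc_free(8 * 1000 * 1000), number=100) / 100
large = timeit.timeit(lambda: calloc_free(8 * 5000 * 5000), number=100) / 100
print(f"calloc   8 MB: {small * 1e6:.1f} µs per call")
print(f"calloc 200 MB: {large * 1e6:.1f} µs per call")
```

Whether the small size is served from the heap (and memset to zero) or via a fresh zeroed mapping depends on the allocator's mmap threshold, which glibc adjusts dynamically.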
In fact, the OS isn't even "really" allocating that memory until you try to access it. A request for a terabyte-scale array succeeds on a machine without terabytes to spare:
In [23]: x = np.zeros(2**40) # No MemoryError!
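One way to observe the lazy mapping is to time two full passes over a freshly allocated array (a sketch; the measured gap is platform-dependent and can be modest, since the summation itself dominates):

```python
import time

import numpy as np

# The pages backing a large np.zeros array are materialized lazily: the
# first full pass pays the page-fault cost, the second pass reads memory
# that is already resident.
x = np.zeros((5000, 5000))

t0 = time.perf_counter()
first = x.sum()   # faults every page in
t1 = time.perf_counter()
second = x.sum()  # pages already mapped
t2 = time.perf_counter()

print(f"first pass:  {(t1 - t0) * 1e3:.2f} ms")
print(f"second pass: {(t2 - t1) * 1e3:.2f} ms")
```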
Author: 黑洞官方问答小能手
Link: https://www.pythonheidong.com/blog/article/2046412/1f2cec359e04e81fd5d9/
Source: python黑洞网