I see that they produce the same result. Is the difference just speed? Why are there two functions for the same thing?
>>> range(5)
[0, 1, 2, 3, 4]
>>> xrange(5)
xrange(5)
>>> for i in range(5):
...     print i
...
0
1
2
3
4
>>> for i in xrange(5):
...     print i
...
0
1
2
3
4
xrange produces the same result, but in a different way.
As you may have already guessed, the
range function returns a list:
>>> type(range(10))
<type 'list'>
The range function occupies an amount of memory proportional to the size of the range you pass as a parameter.
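To make that memory cost concrete, here is a small sketch in Python 3 syntax, where `list(range(n))` materializes the same list that Python 2's `range(n)` returned; `sys.getsizeof` reports the size of the list object:

```python
import sys

# In Python 3, list(range(n)) builds the same list that
# Python 2's range(n) used to return.
small = list(range(10))
large = list(range(1_000_000))

# The list object grows with the size of the range it holds.
print(sys.getsizeof(small))
print(sys.getsizeof(large))
```

The exact byte counts vary by platform, but the second number is orders of magnitude larger than the first.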
On the other hand, the
xrange function returns its own data type, the xrange type:
>>> type(xrange(10))
<type 'xrange'>
Well, there is not much science behind xrange; in fact it has no consistent difference with respect to range in terms of performance. The advantage is that xrange will always occupy the same amount of memory (RAM) regardless of the size of the range:
The xrange type is an immutable sequence which is commonly used for looping. The advantage of the xrange type is that an xrange object will always take the same amount of memory, no matter the size of the range it represents. There are no consistent performance advantages.
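A quick check confirms the constant footprint. This sketch uses Python 3 syntax, where the built-in range plays the role that xrange played in Python 2:

```python
import sys

# Python 3's range is a lazy object, just like Python 2's xrange:
# it stores only start, stop and step, not the values themselves.
print(sys.getsizeof(range(10)))
print(sys.getsizeof(range(10**9)))  # same size, despite the huge span
```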
In summary, the ultimate goal of both functions is to produce a sequence of integers, but we could say that xrange does it on demand thanks to its "lazy" nature.
In both cases there is support for the iteration protocol, since both have the __iter__ method:
>>> r = range(10)
>>> r
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> r.__iter__
<method-wrapper '__iter__' of list object at 0xb603fdcc>
>>> xr = xrange(10)
>>> xr
xrange(10)
>>> xr.__iter__
<method-wrapper '__iter__' of xrange object at 0xb600e4d0>
So the following cases are equivalent:
>>> for item in r:
...     print item
...
0
1
2
3
4
5
6
7
8
9
>>> for item in r.__iter__():
...     print item
...
0
1
2
3
4
5
6
7
8
9
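The same equivalence can be seen with the built-ins `iter()` and `next()` (shown here in Python 3 syntax), which are just convenient front-ends for the protocol methods above:

```python
r = list(range(4))      # a plain list: [0, 1, 2, 3]

it = iter(r)            # equivalent to r.__iter__()
print(next(it))         # 0
print(next(it))         # 1

# A for loop does exactly this under the hood,
# stopping when StopIteration is raised.
for item in it:
    print(item)         # 2, then 3
```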
The same applies to xr. The big difference is that when you iterate over r you are doing it over a list that has already been fully evaluated (and loaded into memory), whereas when you iterate over xr the values are handed to you as you need them (loaded into memory one at a time).
Now this may not make much sense with a range of 10 integers, but give it a try with a few million and you will notice the difference in your RAM.
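You can also see the laziness directly: a lazy range over an astronomically large span is created instantly, because nothing is materialized until you ask for it. A Python 3 sketch (where range behaves like Python 2's xrange):

```python
from itertools import islice

big = range(10**12)              # instantaneous: no values are stored
first_three = list(islice(big, 3))
print(first_three)               # [0, 1, 2]
```

Building the equivalent list would need terabytes of RAM; the lazy version only ever holds one value at a time.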
I know this is not part of the initial question, but it seemed pertinent to add it to the answer, since generators are objects that also implement the iteration protocol.
They are similar to list comprehensions but are created using parentheses instead of brackets.
>>> lista = [1, 2, 3, 4, 5]
>>> [x**2 for x in lista]
[1, 4, 9, 16, 25]
>>> (x**2 for x in lista)
<generator object <genexpr> at 0xb60427fc>
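Like xrange, a generator expression only computes values when asked. A short sketch in Python 3 syntax showing the on-demand consumption:

```python
lista = [1, 2, 3, 4, 5]
gen = (x**2 for x in lista)    # nothing is computed yet

print(next(gen))               # 1  (first square, computed on demand)
print(list(gen))               # [4, 9, 16, 25]  (the remaining squares)
```

Note that a generator is exhausted once consumed; iterating over it a second time yields nothing.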
Now, generators work in a similar way to xrange, since they are also "lazy" and only produce each value as you need it, without loading everything into memory at once.
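To close the loop, here is a hypothetical my_xrange generator function (Python 3 syntax; the name is made up for illustration) that reproduces xrange's lazy behavior with yield:

```python
def my_xrange(n):
    """Yield 0, 1, ..., n-1 one at a time, like Python 2's xrange."""
    i = 0
    while i < n:
        yield i          # execution pauses here until the next request
        i += 1

print(list(my_xrange(5)))    # [0, 1, 2, 3, 4]
```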