python – What is the difference between two for loops when removing elements while traversing a list

Question:

Why does the interpreter in the first case leave 3 zeros behind, ['1', '0', '0', '0', '1'], while in the second case it removes all of them? What is the difference? for works with each element in turn, so why does it skip 3 zeros?

list1 = ['1', '0', '0', '0', '0', '0', '0', '0', '0', '1']
for _ in list1:
    list1.remove('0') 
print(list1)  # -> ['1', '0', '0', '0', '1']

nuly = ['3', '2', '0', '0', '3', '5', '0', '0', '0', '8', '0'] 
for _ in nuly:
    nuly.remove('0')
print(nuly)  # -> ['3', '2', '3', '5', '8']

Answer:

Both loops are the same; only the input lists are different. You are removing elements from the list while traversing it.

When traversing the list, the for loop does not copy it but creates an iterator that returns the elements of the list one by one. So when you remove an element (.remove('0') searches for the first '0' in the list and removes it), the list changes, and those changes are visible to the iterator.

In the current implementation of Python (CPython), a list iterator stores a reference to the list itself and the current index into it. Elements are returned as long as the length of the list is greater than the current index. Here is the gist of the next(list_iterator) call, which returns the next element on each iteration:

listiter_next(listiterobject *it)
{
    ... 
    if (it->it_index < PyList_GET_SIZE(it->it_seq)) {
        item = PyList_GET_ITEM(it->it_seq, it->it_index);
        ++it->it_index;
        return item;
    }
    ...
}

In Python this looks roughly like:

if i < len(lst):
    item = lst[i]
    i += 1
    return item
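
To see the same mechanism in pure Python, here is a minimal sketch of an index-based iterator (the class IndexIter is illustrative, not the real list_iterator type):

class IndexIter:
    def __init__(self, lst):
        self.lst = lst   # reference to the same list, not a copy
        self.i = 0       # current index

    def __iter__(self):
        return self

    def __next__(self):
        if self.i < len(self.lst):  # the length is re-checked on every step
            item = self.lst[self.i]
            self.i += 1
            return item
        raise StopIteration

lst = ['1', '0', '0', '0', '0', '0', '0', '0', '0', '1']
for _ in IndexIter(lst):
    lst.remove('0')
print(lst)  # -> ['1', '0', '0', '0', '1']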

If you follow the code on pythontutor.com step by step:

#XXX BROKEN
lst = [0, '0', '0', '0', '0', '0', '0', '0', '0', 9]
for i, x in enumerate(lst):
    lst.remove('0') 
    size = len(lst)
print(lst) # -> [0, '0', '0', '0', 9]

You can see that the loop continues as long as i < size. Therefore, the loop may terminate before all '0' elements have been removed.
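
To make the early termination visible without pythontutor.com, here is a rough sketch that prints the index, the current length, and the list at every step (the printout is illustrative):

lst = [0, '0', '0', '0', '0', '0', '0', '0', '0', 9]
for i, x in enumerate(lst):
    lst.remove('0')
    print(i, len(lst), lst)
# 0 9 [0, '0', '0', '0', '0', '0', '0', '0', 9]
# 1 8 [0, '0', '0', '0', '0', '0', '0', 9]
# 2 7 [0, '0', '0', '0', '0', '0', 9]
# 3 6 [0, '0', '0', '0', '0', 9]
# 4 5 [0, '0', '0', '0', 9]
# the next check is 5 < 5, which fails, so the loop stops with zeros left over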


If you want to remove every '0' from the list, the usual way is:

lst[:] = [x for x in lst if x != '0']
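
Note that the slice assignment lst[:] = ... replaces the contents of the existing list object in place, so any other references to that list see the change; a plain lst = ... would only rebind the name. A small illustration (the names are arbitrary):

lst = ['1', '0', '0', '1']
alias = lst                            # second reference to the same list object
lst[:] = [x for x in lst if x != '0']  # in-place update, visible through alias
print(alias)  # -> ['1', '1']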

If you don't want to create a temporary list, you can loop through the list, move the values you want to keep to the front of the list, and then delete all the leftover elements at the end:

def remove_all(seq, value):
    pos = 0
    for item in seq:
        if item != value:
            seq[pos] = item
            pos += 1
    del seq[pos:]

remove_all(lst, '0')
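
For example, on the second list from the question this leaves only the non-zero items (a small illustrative check):

nuly = ['3', '2', '0', '0', '3', '5', '0', '0', '0', '8', '0']
remove_all(nuly, '0')
print(nuly)  # -> ['3', '2', '3', '5', '8']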

Both solutions are linear, O(n). The first one requires O(n - k) additional memory for the temporary list, where k = lst.count('0').

If it is known that only a few values need to be removed from a large list (k is small and does not depend on n), then deletion with del lst[i] can be used, traversing the list in reverse order (since a deletion does not affect the elements before it):

for i in reversed(range(len(lst))):
    if lst[i] == '0':
        del lst[i] # O(n) * k == O(n * k)
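
For comparison, deleting by index while traversing forward shifts the remaining elements to the left and skips the neighbour of every deleted item; a small sketch of that mistake (not from the original answer):

lst = ['1', '0', '0', '0', '0', '1']
for i in range(len(lst)):            # the length is captured once, before any deletion
    if i < len(lst) and lst[i] == '0':
        del lst[i]                   # shifts the tail left, so the next '0' is skipped
print(lst)  # -> ['1', '0', '0', '1']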

In the general case (when k ~ n), this is a quadratic O(n**2) algorithm.

What is wrong with quadratic algorithms

For large n, quadratic solutions can be noticeably slower than linear ones.

For example, for a list with a million elements a linear algorithm requires no more than C1 * 1_000_000 steps (instructions), while a quadratic one requires C2 * 1_000_000_000_000, where C1, C2 are constants that do not depend on the size of the input list. C1 and C2 are roughly of the same order of magnitude in this case, so the linear algorithm is much preferable when k ~ n.

If a million instructions execute in about a millisecond (you won't even have time to blink), then the ~10**12 instructions of the quadratic algorithm take about a million milliseconds (roughly a quarter of an hour), and with a constant just a hundred times larger, a whole day; that is, if someone has the patience to wait or the battery does not run out before execution finishes.
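
A rough way to feel the difference is to time both approaches on the same data. A small benchmark sketch (the sizes and names are illustrative, and the absolute numbers depend on the machine):

from timeit import timeit

n = 10_000
data = ['0', '1'] * (n // 2)   # half of the elements are '0'

def linear(lst):
    lst[:] = [x for x in lst if x != '0']   # one pass over the list

def quadratic(lst):
    while '0' in lst:                       # O(n) search + O(n) remove, repeated k times
        lst.remove('0')

print(timeit(lambda: linear(data[:]), number=10))     # a few milliseconds in total
print(timeit(lambda: quadratic(data[:]), number=10))  # noticeably slower, and the gap grows with n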

A million elements is not a large input by modern standards (even phones have gigabytes of memory).

As a rule, you can ignore the constants (C1, C2 in the example) outside hot spots: for example, if the constant changes by an order of magnitude (10 times), a million instructions of the linear algorithm will take 10 times longer, ~10 milliseconds (still faster than you can blink), which is far less than the hours required by a quadratic algorithm with ~10**12 operations.

Summing up: when writing an algorithm, it is worth focusing on simplicity, readability, and whether it can complete the task at all. Micro-optimizations that cripple the code and improve only the constant (C1, C2 in the example) are better avoided unless a profiler says otherwise. If it is not known in advance that the input is limited in size, pay attention to the order of growth (big O) of the algorithm used. In particular, if it does not noticeably complicate the implementation, linear algorithms (O(n)) are far preferable to quadratic ones (O(n*n)). Real-world examples: https://accidentallyquadratic.tumblr.com/
