Notes on CPython Lists

As I was learning to program, Python lists seemed totally magical to me. I imagined them as being implemented by some sort of magical data structure: part linked list, part array, perfect for everything.

As I grew as an engineer, it occurred to me that this was unlikely. I guessed (correctly) that rather than some sort of magical implementation, lists are just backed by a resizable array. I decided to read the code and find out.

One of the nice things about CPython is its readable implementation. Although the relevant file is over 2,000 lines of C, most of it is the sorting algorithm and boilerplate to make the functions callable from Python code.[1] The core list operations are short and straightforward.

Here are a few interesting things I found reading the implementation. Code snippets below come from the CPython source with explanatory comments added by me.

List Resizing #

If you append to a Python list and the backing array isn’t big enough, the backing array must be expanded. When this happens, the backing array grows by roughly an eighth (about 12.5%). Personally, I had assumed this growth factor was much larger: in Java, ArrayList grows by 50% when expanded,[2] and in Ruby, Array grows by 100%.[3] The Python implementation optimizes for memory usage over speed. This is another reason to preallocate Python lists when possible.

// roughly: new_allocated = newsize + newsize / 8 (plus a small constant)
new_allocated = (size_t)newsize + (newsize >> 3) + 
    (newsize < 9 ? 3 : 6);

link to CPython reallocation code
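
To see the over-allocation in action, here is a small sketch of my own (not from the post) that watches the allocated size jump as elements are appended:

# My own sketch: watch the list's over-allocation steps as it grows.
# sys.getsizeof reports the list object's current allocation in bytes.
import sys

lst = []
last_size = sys.getsizeof(lst)
for i in range(64):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != last_size:
        print(f"len={len(lst):3d}  allocated={size} bytes")
        last_size = size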

Inserting at the beginning of the list #

Inserting at the beginning of a list takes linear time. This isn’t that surprising given the rest of the implementation, but it’s good to know that some_list.insert(0, value) is rarely a good idea. A Reddit user reminded me of collections.deque, which trades constant-time indexing for constant-time inserts and removals at both ends.

// First, shift all the values after our insertion point 
// over by one
for (i = n; --i >= where; ) 
  items[i+1] = items[i];
// Increment the number of references to v (the value we're inserting) 
// for garbage collection
Py_INCREF(v);
// insert our actual item
items[where] = v;
return 0;

link to CPython insertion code
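
As a rough illustration (my own timing sketch, not a benchmark from the post), compare repeated prepends on a list with collections.deque.appendleft:

# My own sketch: prepending to a list is O(n) per insert,
# while deque.appendleft is O(1).
import timeit
from collections import deque

def prepend_list(n):
    lst = []
    for i in range(n):
        lst.insert(0, i)   # shifts every existing element each time

def prepend_deque(n):
    d = deque()
    for i in range(n):
        d.appendleft(i)    # constant time per insert

for n in (1_000, 10_000, 100_000):
    t_list = timeit.timeit(lambda: prepend_list(n), number=1)
    t_deque = timeit.timeit(lambda: prepend_deque(n), number=1)
    print(f"n={n:6d}  list.insert(0, x): {t_list:.3f}s  deque.appendleft: {t_deque:.3f}s")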

Creating List slices #

Taking a slice of a list, e.g. some_list[6:50], is also a linear-time operation in the size of the slice, so again, no magic. You could imagine optimizing this with some sort of copy-on-write semantics, but the CPython code favors simplicity:

// Copy each item's pointer into the new list, incrementing its
// reference count along the way
for (i = 0; i < len; i++) {
  PyObject *v = src[i];
  Py_INCREF(v);
  dest[i] = v;
}

link to CPython slice code
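
One consequence worth spelling out (my own example, not from the post): the slice is a new list holding references to the same objects, so rebinding its elements leaves the original untouched:

>>> a = [1, 2, 3, 4, 5]
>>> b = a[1:4]          # copies three pointers in linear time
>>> b[0] = 'changed'
>>> a
[1, 2, 3, 4, 5]
>>> b
['changed', 3, 4]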

Slice Assignment #

You can assign to a slice! I’m sure this is commonly known among professional Python developers, but I’d never run into it in several years of Python programming. I only discovered it when I came across the list_ass_slice(...) function in the code. However, watch out on large lists: the items deleted from the original list are copied first, which briefly increases memory usage by the size of the deleted slice.

>>> a = [1,2,3,4,5]
>>> a[2:4] = 'a'
>>> a
[1, 2, 'a', 5]

>>> a = [1,2,3,4,5]
>>> a[2:4] = ['replace1', 'replace2', 'replace3']
>>> a
[1, 2, 'replace1', 'replace2', 'replace3', 5]
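
The same machinery handles deletion: assigning an empty iterable to a slice removes that range in place. A small example of my own:

>>> a = [1, 2, 3, 4, 5]
>>> a[1:3] = []
>>> a
[1, 4, 5]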

Sorting #

Python lists are sorted with an algorithm known as “timsort”. It’s wildly complicated and described in detail in a side document in the source tree. Roughly, it builds up longer and longer runs of sequential items and merges them. Unlike a normal merge sort, it starts by looking for sections of the list that are already sorted (runs in the code), which lets it take advantage of input data that is already partially sorted.
An interesting tidbit: for sorting small arrays (or small sections of a larger array) of up to 64 elements,[4] timsort uses “binary sort”. This is essentially insertion sort, but using binary search to find where each element should be inserted. It’s actually an O(n²) algorithm! An interesting example of real-world performance winning over theoretical algorithmic complexity. link
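
For illustration, here is a rough Python sketch of the binary insertion sort idea (my own simplification using the standard bisect module, not the CPython code):

# My own sketch of binary insertion sort: binary search finds each
# insertion point, but inserting still shifts elements, so it's O(n^2).
import bisect

def binary_insertion_sort(items):
    result = []
    for item in items:
        bisect.insort_right(result, item)
    return result

print(binary_insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]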

Did I miss something cool about CPython lists? Let me know in the comments.

Thanks to Leah Alpert for providing suggestions on this post.


  1. I came across the boilerplate generation code while I was writing this post, and it’s super cool! A Python-based C preprocessor generates and maintains macros to do argument parsing and munging between Python and C.

  2. OpenJDK 6: int newCapacity = (oldCapacity * 3)/2 + 1; OpenJDK 8: int newCapacity = oldCapacity + (oldCapacity >> 1);

  3. https://github.com/ruby/ruby/blob/0d09ee1/array.c#L392 

  4. https://github.com/python/cpython/blob/1fb72d2ad243c965d4432b4e93884064001a2607/Objects/listobject.c#L1923 
