[FRPythoneers] Are lists threadsafe?

Cameron Laird claird at lairds.com
Mon Mar 8 18:19:43 MST 2004

> From frpythoneers-bounces at lists.community.tummy.com Mon Mar 08 13:37:25 2004
> Envelope-to: claird at phaseit.net
> X-pair-Authenticated:
> From: "Jim Baker" <jbaker at zyasoft.com>
> 		.
> 		.
> 		.
> Yes, but...

> From the Python Cookbook 6.1:

> "However, adding threads to a Python program to speed it up is often not a
> successful strategy. The reason for this is the Global Interpreter Lock
> (GIL), which protects Python's internal data structures. This lock must be
> held by a thread before it can safely access Python objects. Without the
> lock, even simple operations (such as incrementing an integer) could fail.

> Therefore, only the thread with the GIL can manipulate Python objects or
> call Python/C API functions. To make life easier for programmers, the
> interpreter releases and reacquires the lock every 10 bytecode instructions
> (a value that can be changed using sys.setcheckinterval). The lock is also
> released and reacquired around I/O operations, such as reading or writing a
> file, so that other threads can run while the thread that requests the I/O
> is waiting for the I/O operation to complete. However, effective
> performance-boosting exploitation of multiple processors from multiple
> pure-Python threads of the same process is just not in the cards. Unless the
> CPU performance bottlenecks in your Python application are in C-coded
> extensions that release the GIL, you will not observe substantial
> performance increases by moving your multithreaded application to a
> multiprocessor machine."

> The GIL noted above will indeed protect reads & writes to lists &
> dictionaries, to prevent corruption to the underlying data structures.
> Queues are really for distributing work to threads.  However, you will need
> to use some sort of locking scheme if you want atomicity for a larger set of
> ops.  I would turn to the Python Cookbook for some good recipes there.  The
> ReadWriteLock they have is not bad.

> Finally, the received wisdom is that concurrent I/O is significantly more
> scalable and easier with an async event framework like Twisted.  (You're
> probably not doing something compute bound with Python, I would imagine...)
> However, we do have to work with threads when working with say mod_python on
> Windows and some globals around for database caching.
> 		.
> 		.
> 		.
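Jim's atomicity point deserves an illustration.  A minimal sketch (my
own hypothetical names, not from the *Cookbook*): a single operation
such as list.append is protected by the GIL, but a read-modify-write
sequence spans several bytecodes, so the interpreter can switch
threads in the middle of it; a plain threading.Lock restores atomicity
for the larger op.

```python
import threading

counter = [0]                  # shared mutable state
lock = threading.Lock()

def unsafe_increment(n):
    for _ in range(n):
        counter[0] = counter[0] + 1   # load, add, store: NOT atomic

def safe_increment(n):
    for _ in range(n):
        with lock:                    # lock makes the whole op atomic
            counter[0] = counter[0] + 1

def run(worker, n_threads=4, n=100_000):
    """Reset the counter, run `worker` in n_threads threads, return the total."""
    counter[0] = 0
    threads = [threading.Thread(target=worker, args=(n,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter[0]

print(run(safe_increment))     # always 400000
print(run(unsafe_increment))   # may fall short of 400000 under contention
```

The unlocked version happens to survive many runs, since thread
switches are comparatively rare--which is exactly why this class of
bug is so treacherous.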
1.  I'm a big proponent of that wisdom:  I frequently 
    advocate concurrency-through-event-orientation,
    and do indeed claim it both more scalable and 
    easier.
2.  The quoted passage often gets me to giggling.  To
    observe that "adding threads ... is often not a
    successful strategy" is, it strikes me,
    uncharacteristically coy of the *Cookbook*.  I'll
    be explicit:  introduction of threads can
    sometimes--perhaps even frequently--SLOW
    performance, not just fail to improve it.
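A rough sketch of that last claim, under the assumption of a
CPU-bound, pure-Python workload (the countdown task is mine, purely
illustrative): the GIL serializes the threads anyway, and the extra
lock traffic and context switching can make the threaded run slower
than doing the same work sequentially.

```python
import threading
import time

def count_down(n):
    # Pure-Python busy work; never releases the GIL for I/O.
    while n:
        n -= 1

N = 2_000_000

# Sequential: one thread does all the work.
start = time.perf_counter()
count_down(N)
sequential = time.perf_counter() - start

# "Parallel": two threads split the work, but the GIL lets only
# one of them execute Python bytecode at any given moment.
t1 = threading.Thread(target=count_down, args=(N // 2,))
t2 = threading.Thread(target=count_down, args=(N // 2,))
start = time.perf_counter()
t1.start(); t2.start()
t1.join(); t2.join()
threaded = time.perf_counter() - start

print("sequential: %.3fs   threaded: %.3fs" % (sequential, threaded))
```

On a typical machine the threaded timing is no better than the
sequential one, and under GIL contention it is often worse.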
