
The Windows 10 approach - apply pain to the users until they "upgrade". Python 3 hasn't made it on its own merits, and we have fanboys like this trying to figure out some way to force people to upgrade.

Library porting to Python 3 did not go well. Many Python 2.x libraries were replaced by different Python 3 libraries from different developers. Many of the new libraries also work on Python 2. This creates the illusion that libraries are compatible across versions, but in fact it just means you can now write code that runs on both Python 2 and Python 3. Converting old code can still be tough. (My posting on this from last year, after I ported a medium-size production application, goes into more detail.[1] Note the angry, but not useful, replies from Python fanboys there.)

Python 3, at this point, is OK. But it was more incompatible than it needed to be. This created a Perl 5/Perl 6 type situation, where nobody wants to upgrade. The Perl crowd has the sense to not try to kill Perl 5.

Coming up next, Python 4, with optional, unchecked type declarations and bad syntax. Some of the type info goes in comments, because it won't fit the syntax.
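For context, those declarations already exist and really are unchecked at runtime; here's a small sketch of both the annotation form and the comment form from PEP 484 (the function names are illustrative):

```python
from typing import List

# Annotation syntax (Python 3 only):
def mean(values: List[float]) -> float:
    return sum(values) / len(values)

# Comment form, used where annotation syntax is unavailable,
# e.g. code that must also run on Python 2:
def scale(values, factor):
    # type: (List[float], float) -> List[float]
    return [v * factor for v in values]
```

Neither form is enforced when the code runs; a separate tool such as mypy checks them.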

Stop van Rossum before he kills again.

[1] http://www.gossamer-threads.com/lists/python/python/1187134



What kills me is, they decided to make breaking changes, and they still kept the damn global interpreter lock! It's like saying "Sorry guys, it's 2016, and you have to port all your code, but no parallelism for you!"

Beatings will continue until morale improves...


My understanding is that it doesn't make practical sense to remove the GIL; doing so would result in a slowdown of the interpreter[1]:

  Back in the days of Python 1.5, Greg Stein actually implemented a
  comprehensive patch set (the “free threading” patches) that removed the GIL
  and replaced it with fine-grained locking. Unfortunately, even on Windows
  (where locks are very efficient) this ran ordinary Python code about twice as
  slow as the interpreter using the GIL. On Linux the performance loss was even
  worse because pthread locks aren’t as efficient.
 
  Since then, the idea of getting rid of the GIL has occasionally come up but
  nobody has found a way to deal with the expected slowdown, and users who
  don’t use threads would not be happy if their code ran at half the speed.
  Greg’s free threading patch set has not been kept up-to-date for later Python
  versions.

> no parallelism for you!

This is a bit of hyperbole; Python supports both OS threads and processes, and both can be used to achieve parallelism. Although the GIL limits Python bytecode to one CPU at a time, many I/O routines release the GIL while they wait, so I/O can proceed in parallel, and native code can release the GIL during long computations. Processes sidestep the GIL entirely, and the standard library makes launching one about as easy as launching a thread, with higher-level support for parallel-mapping a function across a pool of processes.
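A minimal sketch of the I/O point: threads still overlap when blocked, because blocking calls release the GIL (`time.sleep` stands in for real I/O here):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(task_id):
    time.sleep(0.2)  # releases the GIL while "waiting"
    return task_id

start = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fake_io, range(4)))
elapsed = time.time() - start

# The four 0.2 s waits overlap, so the total is close to 0.2 s,
# not the 0.8 s a serial loop would take.
```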

[1]: https://docs.python.org/3/faq/library.html#can-t-we-get-rid-...


The global interpreter lock is the price paid for being able to monkey-patch code and objects from one thread while they are being used in another. That falls out of the design of the original CPython interpreter, a naive implementation in which essentially everything is a dictionary. It would be "un-Pythonic" to change that.
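A sketch of the dynamism in question (the class here is illustrative): methods live in dictionaries that any code, on any thread, can rewrite while instances are in use, and the GIL is what makes each such mutation atomic:

```python
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()

# Methods are just dictionary entries on the class object...
assert "greet" in Greeter.__dict__

# ...so running code can swap them out while instances exist,
# and existing instances pick up the change immediately.
Greeter.greet = lambda self: "goodbye"
assert g.greet() == "goodbye"
```

Fine-grained locking would have to protect every one of these dictionary operations, which is where the measured slowdown comes from.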


> The global interpreter lock is the price paid for being able to monkey-patch code and objects from one thread while they are being used in another.

So, in other words, “we don't trust our users to do concurrency correctly, and we need to keep our system safe in spite of that”?


I'm not aware of any popular scripting language that supports utilizing multiple processors without starting new processes. Is this a thing? I don't do a lot of parallel stuff with Python but multiprocessing has met my needs.


> I'm not aware of any popular scripting language that supports utilizing multiple processors without starting new processes.

Really, this is more about implementations than languages; a fairly popular Ruby implementation (JRuby) supports native threading (and has no GIL), and IIRC the main Perl 6 implementation does as well.

Elixir is up and coming in popularity (don't know if you consider it "scripting", which is a relatively fuzzy-bounded category), and definitely supports utilizing multiple processors without starting new OS processes (it uses "processes" as that term is used in the Erlang ecosystem, which are a different thing, basically M:N green threads.)


"Jython and IronPython have no GIL and can fully exploit multiprocessor systems" - https://wiki.python.org/moin/GlobalInterpreterLock

Granted, neither implementation has the popularity of CPython.


And both are sadly stuck on Python2, last I checked :(


Admittedly it's not all that popular, but Tcl can do this with no problems at all.

(By the way, Tcl also has an integrated event loop as well as a complete implementation of coroutines, both of which Python only very recently got.)


First, there is a solid argument to be made that if you need the performance of multi-threading, you shouldn't be doing it in Python anyway.

Second, parallelism in Python is hindered only for threads; multi-processing works just fine: https://www.youtube.com/watch?v=gVBLF0ohcrE
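A minimal sketch of that approach, assuming a CPU-bound workload (the prime-counting function is illustrative): separate processes each get their own interpreter and their own GIL, so pure-Python work actually runs on multiple cores:

```python
from multiprocessing import Pool

def count_primes(limit):
    # Pure Python and CPU-bound, so threads would serialize on the
    # GIL, but worker processes run truly in parallel.
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    with Pool(4) as pool:
        totals = pool.map(count_primes, [1000, 2000, 3000, 4000])
    print(totals)
```

The `if __name__ == "__main__"` guard matters: on platforms that spawn rather than fork, each worker re-imports the module, and the guard keeps that import from recursively launching more pools.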

Finally, removing the GIL is not trivial and the other changes to the language that mandated breaking backwards compatibility are pretty worth it (IMO).


The GIL is a design decision of the CPython interpreter, not a limitation of the language itself.

The reference implementation of most compilers/interpreters is usually the slowest one, because its code has to stay legible.



