> My point is that when it comes to stereotypical database logic, there are no advantages to using [asyncio] versus a traditional threaded approach, and you can likely expect a small to moderate decrease in performance, not an increase.
The benchmarks he ran are interesting, but initially I thought they missed the biggest advantage of asyncio: trivial concurrency through coroutines. I gave a talk about that at PyCon 2014 (slides here). I think what asyncio enables is remarkable. Later on, Mike linked me to another reply of his and now I think I understand where he's coming from.
Basically: if you're dealing with a transactional database, why would you care about the kind of concurrency asyncio enables? When you're using a database with strong consistency, you have to wrap operations in transactions, provide a clear ordering of execution, handle locking, and so on. He's right that asynchronous Python alone doesn't help in that situation.
But what if you want to mix queries through SQLAlchemy with lookups in memcache and Redis, and simultaneously enqueue background tasks with Celery? This is where asyncio shines: when you want to use many disparate systems together with the same asynchronous programming model. Asyncio makes it trivial to compose these kinds of infrastructure.
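To make that concrete, here's a minimal sketch of the kind of composition I mean, written in the same pre-3.5 `yield from` style as the example below. Every client here is a stand-in I made up for illustration (the sleeps simulate RPC latency); real code would use asyncio-compatible drivers for each backend:

```python
import asyncio

# Stand-ins for real asyncio clients (memcache, Redis, a SQL database,
# and a task queue). Each one simulates the latency of a network RPC.
@asyncio.coroutine
def memcache_get(key):
    yield from asyncio.sleep(0.01)
    return {'profile': key}

@asyncio.coroutine
def redis_hgetall(key):
    yield from asyncio.sleep(0.01)
    return {'theme': 'dark'}

@asyncio.coroutine
def sql_query(user_id):
    yield from asyncio.sleep(0.02)
    return [('order', user_id)]

@asyncio.coroutine
def enqueue_task(name, *args):
    yield from asyncio.sleep(0.005)

@asyncio.coroutine
def handle_request(user_id):
    # Fan out to three different backends at once. gather() runs the
    # coroutines concurrently and returns results in argument order.
    profile, prefs, orders = yield from asyncio.gather(
        memcache_get('profile:%d' % user_id),
        redis_hgetall('prefs:%d' % user_id),
        sql_query(user_id))
    # Kick off a background task (Celery-style) before responding.
    yield from enqueue_task('log_request', user_id)
    return profile, prefs, orders

loop = asyncio.get_event_loop()
print(loop.run_until_complete(handle_request(42)))
```

The value isn't any single call; it's that every backend speaks the same coroutine model, so fanning out across all of them is one `gather` away.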
Asyncio won't win in benchmarks that focus on raw performance, as Mike showed. But asyncio will be faster in practice when a request fans out into parallel RPCs to distributed systems: it's Amdahl's law in action, since overlapping the RPCs shrinks total latency toward the serial portion of the work. What I mean specifically is cases where you issue N coroutines and wait for them all later:
```python
import asyncio

@asyncio.coroutine
def do_work(ip_address, session_id):
    # Issue two RPCs; both start running immediately
    session_future = memcache.get(session_id)
    location_future = geocoder.lookup(ip_address)
    # Wait for both; gather returns the results in argument order
    session, location = yield from asyncio.gather(
        session_future, location_future)
    # Now do something with both results
    ...
```
The majority of my experience with asynchronous Python comes from the NDB library for App Engine, which was a precursor to asyncio and is very similar. In that environment, you can access all of the APIs (datastore, memcache, URLFetch, task queues, RPCs, etc.) with a unified asynchronous model. Our codebase that uses NDB employs asynchronous coroutines almost everywhere, which makes it simple to combine many parallel RPCs into workflows and larger pipelines.
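For comparison, here's roughly what a parallel fetch looks like as an NDB tasklet (the keys and function name are hypothetical; this is a sketch from memory, not App Engine documentation):

```python
from google.appengine.ext import ndb

@ndb.tasklet
def fetch_session_and_user(session_key, user_key):
    # Both datastore gets are issued immediately and run in parallel.
    # Yielding a tuple of NDB futures waits for all of them and
    # produces their results as a tuple.
    session, user = yield session_key.get_async(), user_key.get_async()
    # Tasklets return values by raising ndb.Return (Python 2 era).
    raise ndb.Return((session, user))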
Here's a simplified example of one pipeline from my day job. You can think of this as a very basic search engine. Note how many parallel coroutines are executed.
1. Receive an HTTP request
2. In parallel:
    a. Send RPC to geocode the IP address
    b. Send RPC to lookup the inbound IP in a remote database
        - After receiving the response, in parallel:
            - Lookup N rate limiters in N memcache shards
            - Return whether the inbound IP is over its rate limits
    c. Lookup the user's session in memcache
        - If it's missing, create the new session object, then in parallel:
            - Enqueue a task to save the session to the DB
            - Populate the session into memcache
        - Set the user session response header
        - Return the session (new or existing)
3. Wait for the geocode and session RPCs to finish
4. In parallel, do N separate queries based on the user's attributes:
    - First, look in memcache for cached data by attribute
    - If memcache is missing or empty, do a database query
    - Look up query results in the rate limiting cache
5. As queries finish (i.e., asyncio.as_completed; see the sketch after this list):
    - Rank results by relevance
6. Return the best result after all queries finish
7. Wait for the rate limit check from #2b above
    - If the rate limits are over, return a 503 response and abort
8. In parallel:
    - Update the result rate limiting caches
    - Enqueue a task to log the ranking decision
    - Enqueue a task to update the user's session in the database
    - Update the user's session in memcache
    - Start writing the response
9. Wait for all coroutines to finish
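Step 5 is where asyncio.as_completed earns its keep. Here's a rough, self-contained sketch of that ranking loop; `query_by_attribute` is a made-up stand-in for the real per-attribute memcache/database lookup:

```python
import asyncio

@asyncio.coroutine
def query_by_attribute(attribute):
    # Stand-in for the real per-attribute memcache/database query.
    yield from asyncio.sleep(0.01)
    return {'attribute': attribute, 'relevance': len(attribute)}

@asyncio.coroutine
def best_result(attributes):
    # Issue one query coroutine per user attribute, all in parallel.
    queries = [query_by_attribute(attr) for attr in attributes]
    best = None
    # as_completed yields each future as soon as it finishes, so
    # ranking starts before the slowest query has returned.
    for future in asyncio.as_completed(queries):
        result = yield from future
        if best is None or result['relevance'] > best['relevance']:
            best = result
    return best

loop = asyncio.get_event_loop()
print(loop.run_until_complete(
    best_result(['country', 'language', 'device'])))
```

The point is that ranking begins as soon as the first query returns, instead of blocking on the slowest one.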
This pipeline has grown a lot over time. It began as a simple linear process. Now it's 5 layers "deep" and 10 parallel coroutines "wide" in some places. But it's still straightforward to test and expand because coroutines make the asynchronous boundaries clear to new readers of the code.
I can't wait to have this kind of composability throughout the Python ecosystem! Such a unified asynchronous programming model has been a secret weapon for our team. I hope that Mike enables asyncio for SQLAlchemy because I want to use it along with other tools, asynchronously. My goal isn't to speed up the use of SQLAlchemy alone.