Asynchronous process spawning: design question: Celery or Twisted

All: I'm seeking input/guidance/and design ideas. My goal is to find a lean but reliable way to take XML payload from an HTTP POST (no problems with this part), parse it, and spawn a relatively long-lived process asynchronously.

The spawned process is CPU intensive and will last for roughly three minutes. I don't expect much load at first, but there's a definite possibility that I will need to scale this out horizontally across servers as traffic hopefully increases.

I really like the Celery/Django stack for this use: it's very intuitive and has all of the built-in framework to accomplish exactly what I need. I started down that path with zeal, but I soon found my little 512MB RAM cloud server had only 100MB of free memory, and I started sensing that I was headed for trouble once I went live with all of my processes running full-tilt. Also, it's got several moving parts: RabbitMQ, MySQL, celeryd, lighttpd and the Django container.

I can absolutely increase the size of my server, but I'm hoping to keep my costs down to a minimum at this early phase of this project.

As an alternative, I'm considering using Twisted for the process management, as well as Perspective Broker for the remote systems, should they be needed. But for me at least, while Twisted is brilliant, I feel like I'm signing up for a lot by going down that path: writing protocols, callback management, keeping track of job states, etc. The benefits here are pretty obvious - excellent performance, far fewer moving parts, and a smaller memory footprint (note: I need to verify the memory part). I'm heavily skewed toward Python for this - it's much more enjoyable for me than the alternatives :)

I'd greatly appreciate any perspective on this. I'm concerned about starting things off on the wrong track, and redoing this later with production traffic will be painful.
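For reference, the core requirement in the question (parse the POSTed XML, then hand a CPU-bound job off asynchronously so the HTTP handler returns immediately) can be sketched with the standard library alone; `cpu_bound_job`, `parse_payload`, `handle_post`, and the `<job><n>…</n></job>` schema are all invented for illustration:

```python
import multiprocessing
import xml.etree.ElementTree as ET

def cpu_bound_job(params):
    # Stand-in for the ~3-minute CPU-intensive job.
    return sum(i * i for i in range(params["n"]))

def parse_payload(xml_text):
    # Parse the POSTed XML; the <job><n>...</n></job> schema is invented.
    root = ET.fromstring(xml_text)
    return {"n": int(root.findtext("n"))}

def handle_post(xml_text, pool):
    # apply_async returns immediately, so the HTTP response does not
    # have to wait the three minutes the job takes.
    return pool.apply_async(cpu_bound_job, (parse_payload(xml_text),))

if __name__ == "__main__":
    with multiprocessing.Pool(processes=2) as pool:
        result = handle_post("<job><n>1000</n></job>", pool)
        print(result.get(timeout=60))
```

Celery essentially wraps this same pattern (a task function plus an async handle) with a broker in between, which is what buys the horizontal scaling later.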


asked Jan 8 '11 at 17:01

What's the goal of your project? Academic? Hobbyist site? Internet startup? -

It'll be a revenue-generating service, or that's the goal at least. -

In which case I'd point you toward this article:… -

Note that Celery 2.2 will support using eventlet/gevent instead of processes to do concurrency, which may dampen your memory fears. -

@MattH, thanks for the link, I agree with this approach and I think it makes the most sense. -

3 Answers

On my system, RabbitMQ running with pretty reasonable defaults is using about 2MB of RAM. Celeryd uses a bit more, but not an excessive amount.

In my opinion, the overhead of RabbitMQ and celery are pretty much negligible compared to the rest of the stack. If you're processing jobs that are going to take several minutes to complete, those jobs are what will overwhelm your 512MB server as soon as your traffic increases, not RabbitMQ. Starting off with RabbitMQ and Celery will at least set you up nicely to scale those jobs out horizontally though, so you're definitely on the right track there.

Sure, you could write your own job control in Twisted, but I don't see it gaining you much. Twisted has pretty good performance, but I wouldn't expect it to outperform RabbitMQ by enough to justify the time and potential for introducing bugs and architectural limitations. Mostly, it just seems like the wrong spot to worry about optimizing. Take the time that you would've spent re-writing RabbitMQ and work on reducing those three minute jobs by 20% or something. Or just spend an extra $20/month and double your capacity.

answered Jan 8 '11 at 21:01

Thanks for the input, much appreciated. I'll continue down my current path and probably end up paying for more resources. As of now, skipping innodb on mysql seems to have helped some, and I'll be able to get things going much more quickly via the celery route. - mcauth

I'll answer this question as though I was the one doing the project and hopefully that might give you some insight.

I'm working on a project that will require the use of a queue, a web server for the public facing web application and several job clients.

The idea is to have the web server continuously running (no need for a very powerful machine here). The work, however, is handled by job clients: more powerful machines that can be started and stopped at will. The job queue resides on the same machine as the web application. When a job is inserted into the queue, a process that manages the job clients kicks into action and spins up the first client. Using a load balancer that can start new servers as the load increases, I don't have to worry about managing the number of servers running to process jobs in the queue. If there are no jobs in the queue after a while, all job clients can be terminated.

I will suggest using a setup similar to this. You don't want job execution to affect the performance of your web application.
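The elastic "job client" scheme above can be sketched in-process; `JobDispatcher`, its thresholds, and the thread-per-client stand-in are all invented for illustration (real job clients would be separate machines behind the load balancer):

```python
import queue
import threading

class JobDispatcher:
    """Toy in-process model of the scheme: 'job clients' (threads here,
    machines in the real setup) are started when work arrives and exit
    after the queue has been empty for idle_timeout seconds."""

    def __init__(self, max_workers=4, idle_timeout=1.0):
        self.jobs = queue.Queue()
        self.results = []
        self.lock = threading.Lock()
        self.workers = []
        self.max_workers = max_workers
        self.idle_timeout = idle_timeout

    def submit(self, job):
        self.jobs.put(job)
        with self.lock:
            # The "load balancer": prune dead clients, start a new one
            # if there is spare capacity.
            self.workers = [w for w in self.workers if w.is_alive()]
            if len(self.workers) < self.max_workers:
                w = threading.Thread(target=self._worker, daemon=True)
                w.start()
                self.workers.append(w)

    def _worker(self):
        while True:
            try:
                job = self.jobs.get(timeout=self.idle_timeout)
            except queue.Empty:
                return  # idle long enough: terminate this job client
            with self.lock:
                self.results.append(job())
            self.jobs.task_done()

d = JobDispatcher()
for n in range(5):
    d.submit(lambda n=n: n * n)
d.jobs.join()  # block until every queued job has been processed
```

The point the answer makes survives the simplification: the web tier only enqueues, and capacity scales with queue depth rather than with request rate.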

answered Jan 8 '11 at 21:01

Adding, quite late, another possibility: using Redis. I currently use Redis with Twisted: I distribute work to workers, which perform it and return results asynchronously.

The "List" type is very useful:

You can use the reliable queue pattern to send work, with a process that blocks and waits until it has new work to do (a new message arriving in the queue).

You can run several workers on the same queue.

Redis has a low memory footprint, but be careful with the number of pending messages, as they will increase the memory Redis uses.
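A minimal in-memory sketch of the reliable queue pattern mentioned here, with Python deques standing in for Redis lists (`ReliableQueue` and its contents are invented; against a real server you would use redis-py's `lpush`, `brpoplpush`, and `lrem` instead):

```python
from collections import deque

class ReliableQueue:
    """In-memory sketch of Redis's reliable queue pattern
    (LPUSH / RPOPLPUSH / LREM), to show the flow without a server."""

    def __init__(self):
        self.pending = deque()     # the "queue" list
        self.processing = deque()  # the "processing" list

    def lpush(self, item):
        self.pending.appendleft(item)

    def rpoplpush(self):
        # Atomically move one job onto the processing list, so it is
        # not lost even if the worker dies mid-task.
        if not self.pending:
            return None
        item = self.pending.pop()
        self.processing.appendleft(item)
        return item

    def ack(self, item):
        # LREM: remove the finished job from the processing list.
        self.processing.remove(item)

q = ReliableQueue()
for job in ("a", "b", "c"):
    q.lpush(job)

done = []
while (job := q.rpoplpush()) is not None:
    done.append(job.upper())  # pretend to do the work
    q.ack(job)
```

The key property: a job popped by a worker that crashes is still sitting in the processing list, so a reaper process can push it back onto the queue later.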

answered Aug 19 '14 at 14:08
