Ruby Background Tasks with Starling

At Inquisix, we help sales professionals exchange trusted referrals. Doing that requires several background tasks, some of which can take 10-15 minutes to process. Obviously, I can’t make a client wait that long, so I needed a system that could handle background tasks. At first, I started with backgroundrb, and it worked just fine. Backgroundrb was in production for two months while Inquisix grew. However, a few things about backgroundrb bothered me:

  • It uses a lot of memory. Every worker creates at least one process. Plus, there is a master process to watch everything and deal with communication. It doesn’t take much before you end up with 5-6 processes. I had to upgrade my test server just to deal with the extra memory requirements.
  • It’s not easy to build a queue with control over threads without creating a ton of processes.
  • Too many times, I wanted to do something pretty straightforward, but I had to dig through the backgroundrb code to figure out the backgroundrb way. For example, don’t ever call sleep in a backgroundrb thread pool. You need to call next_turn instead.

After a while, I decided to look for a simpler way that would scale better without using so much memory. I decided a more traditional queue system would work better for me. At a former company, I built up an enterprise system based on queues that processes millions of transactions across dozens of servers. Something based on queues would work for me, but I did not want to take on the complexity of JMS, ActiveMQ (or some other queue), ActiveMessaging, etc. As usual with Ruby projects, I looked around on the web. Within a few minutes, I came across Starling and Sparrow. Both are Ruby queue servers that speak the memcached protocol. That means I can use the memcache-client gem that I already use. Starling was developed at Twitter for background processing, so I figured it had some real-world testing behind it. Sparrow is newer but basically the same. However, there isn’t much experience with Sparrow yet, so I settled on Starling as my queue server.

To install Starling:

sudo gem install starling
sudo gem install memcache-client

Now, I needed a way to use my new queue server for background tasks. Again, a few minutes of looking, and I found Workling. It is nice and simple, and it had almost everything I wanted. I use Piston for all my plugins, so here is how to install it that way:

piston import vendor/plugins/workling
svn commit -m "added workling"

Make sure you commit now because we will be making some changes to Workling later. Piston will get confused and toss your changes if you don’t commit first.

Client Code

First, create a worker in app/workers/my_worker.rb:

class MyWorker < Workling::Base
  def do_something_big(options = {})
    # long-running work goes here
  end
end

Anything in app/workers that inherits from Workling::Base will get picked up automatically as a worker. Workers are basically listeners on a Starling queue. By default, Workling defines queues based on class and method. There will be a queue for every method in every class that inherits from Workling::Base.
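Workling’s actual routing code isn’t shown here, but the naming convention can be sketched roughly like this (my approximation; Workling uses Rails’ tableize helper, while this stand-in just underscores the class name and appends an “s”):

```ruby
# Rough sketch of class-and-method queue naming: one queue per
# worker class + method pair, e.g. "my_workers:do_something_big".
def queue_key(worker_class_name, method_name)
  snake = worker_class_name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
  "#{snake}s:#{method_name}"  # naive pluralize: just append "s"
end

queue_key("MyWorker", "do_something_big")  # => "my_workers:do_something_big"
```

This also explains the queue names you’ll see in error logs later, like uploadedsong_workers:my_queue.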

Now, you can call your worker asynchronously anywhere like so:

MyWorker.asynch_do_something_big(:some_arg => 5)

Starling Runner

To use Workling’s Starling runner, you need to set up your environment like so:

Workling::Remote.dispatcher = Workling::Remote::Runners::StarlingRunner.new

I add this line to all my environment files (development.rb, etc.). Workling is nice in that if you comment out the above line, all the MyWorker.asynch_* calls will become synchronous calls — nice for debugging!
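That comment-out-for-synchronous trick works because all dispatch goes through a single seam. A minimal sketch of the pattern (my own illustration, not Workling’s actual code):

```ruby
# Minimal sketch of the dispatcher seam: when no remote dispatcher is
# configured, calls fall back to running inline, which is what makes
# the "comment out one line for debugging" trick possible.
class MiniDispatch
  class << self
    attr_accessor :dispatcher
  end

  def self.run(worker, method, options = {})
    if dispatcher
      dispatcher.call(worker, method, options)  # enqueue remotely (hypothetical)
    else
      worker.new.send(method, options)          # synchronous fallback
    end
  end
end

class EchoWorker
  def do_echo(options = {})
    options[:msg]
  end
end

MiniDispatch.run(EchoWorker, :do_echo, :msg => "hi")  # runs inline => "hi"
```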

The Starling runner takes care of several things:

  1. Mapping of queue names to worker code. This is done with Workling::ClassAndMethodRouting, but you can change the queue routing pretty easily.
  2. A client daemon that waits for messages and dispatches them to the responsible workers. If you intend to run this on a remote machine, just check out your Rails project there and start up the Starling client.

Now, fire up Starling, your app, and the Workling runner, and you are processing background tasks. Don’t forget to edit config/starling.yml first to tell Workling where Starling is running.

sudo starling -d
script/workling_starling_client start
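My exact config isn’t reproduced in this post, but a starling.yml along these lines is the idea (host and port here are the Starling defaults; adjust to taste):

```yaml
development:
  listens_on: localhost:22122
production:
  listens_on: localhost:22122
```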

What I ended up with was much better for what I was doing. This combination processed my background tasks faster and more reliably. It is much easier to add new workers and call them. Finally, it uses a whole lot less memory, so my end user application performs better. Basically, it wins on all fronts for me.

Next time, I will share the changes I made to Workling to support threads and provide the necessary configuration to ensure that everything stays running in production.






112 responses to “Ruby Background Tasks with Starling”

  1. Dave

    A quick update on memory. With backgroundrb, I could barely run 4 mongrel instances for my app on my 1 gig production server. Now, with Starling and Workling I can pretty easily run 7 with room to spare. It makes sense given that my backgroundrb installation had 4-5 rails processes, and now I have one rails (Workling) and one small ruby process (Starling).

  2. Rajkumar

    An awesome plugin to use with Starling. But when I tried to use it asynchronously, I am facing a problem (the synchronous call works fine): FAILED to process queue uploadedsong_workers:my_queue. # could not handle invocation of my_queue with {:uid=>"uploadedsong_workers:my_queue:dd5ed62c727f319f4ca89cdce2edd291"}: wrong number of arguments (1 for 0).

    I have a workling UploadedsongWorker with a method my_queue. I am calling this worker in the following way.
    I could not find where the problem is. Can somebody help me?

  3. Dave

    Make sure your worker methods always have an options parameter. For example:
    def process_my_task(options = {})

    Workling is always going to try and pass at least one key to the options hash.
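The “wrong number of arguments (1 for 0)” error quoted above is plain Ruby arity at work, since Workling always passes a hash (at least the :uid key) to the worker method. A tiny demo:

```ruby
# A zero-argument method blows up when handed Workling's options hash,
# exactly like the error quoted in the comment above.
def my_queue_no_opts; end
def my_queue(options = {}); options[:uid]; end

begin
  method(:my_queue_no_opts).call(:uid => "abc")
rescue ArgumentError => e
  e.message  # e.g. "wrong number of arguments (1 for 0)"
end

method(:my_queue).call(:uid => "abc")  # => "abc"
```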

  4. Rajkumar

    Thanks for the timely help, Dave. But now I am stuck with a different problem. I am getting a NoMethodError in the ClassAndMethodRouting class

    WORKLING: runner could not invoke UploadedsongWorker:my_queue with {:uid=>"uploadedsong_workers:my_queue:f90e2826d76cc5202fc11eb05fe16412"}. error was: #
    I have created a worker with class name UploadedsongWorker (app/workers/uploadedsong_worker.rb).

    I think I have missed some naming convention. Can you also help me on this?
    Thanks in advance.

  5. Rajkumar

    Sorry Dave, I am facing the same problem even after adding a parameter to the methods in my workers: def my_queue(options = {})

    But the same code works fine for a synchronous call. What may be the problem in the asynchronous call?
    I am getting the following exception:
    FAILED to process queue uploadedsong_workers:my_queue. # could not handle invocation of my_queue with {:uid=>"uploadedsong_workers:my_queue:e85d05a67bc961265b8c8713d4c5ef03"}: wrong number of arguments (1 for 0).

  6. Rajkumar

    Sorry Dave, I am facing the same problem even after adding a parameter to the methods in my workers: def my_queue(options = {})
    How should I proceed? Is there anything I am missing?

  7. Dave

    Please post your class definition and where it is located. Remove anything you don’t want to share.
    The rules for workers are only:

    1. Worker classes must be in the app/workers directory.

    2. Worker classes need to be named “MyClassWorker”. Replace “MyClass” with your class name. Don’t use underscores in the class name.

    3. They must inherit from Workling::Base.

    The error you’re getting (uploadedsong_workers:my_queue could not handle…) makes me think you don’t have your class using camel case. Something like class UploadedSong should have mapped to uploaded_song for a queue name.

  8. Rajkumar

    Dave, once again thanks for helping me. It will be great if this issue gets solved. I will give more details.
    The following model code is in app/models/uploadedsong_observer.rb

    class UploadedsongObserver

  9. Rajkumar

    Dave, this is /app/workers/uploadedsong_worker.rb: class UploadedsongWorker

  10. Dave

    OK, now I see the problem. Your class needs to be defined like this:
    class UploadedsongWorker < Workling::Base
      # methods
    end

    Without the Workling::Base, the Starling remote discovery class will not be able to find your class. Calling it directly (not remotely) will still work because /app/workers is in the load path.

  11. Rajkumar

    Sorry Dave, this is my worker class. I already had it extended from Workling. Still I face the problem.
    class UploadedWorker

  12. Rajkumar

    class UploadedsongWorker

  13. Rajkumar

    Sorry Dave, if I try to give you the code, it gets filtered when I submit my comment.

  14. Dave

    Can you email it to me? My email is dave @ [this web site’s domain].

  15. nil

    I’m also monkeying around with Workling and Starling (and Sparrow, which seems even better at first glance). Very slick. Thanks for calling it out.
    However, when I try to pass a model object in the options hash, the poller process tells me that it “could not handle invocation of method_name with nil: No connection to server”. If I pass in an identifier and call find to look up the AR object, then it works fine.

  16. Dave

    Sparrow looks great as well, but I feel better with Starling because it’s been around longer. You should also check out the very different ways they perform the same task. Starling makes use of threads while Sparrow does network events (like backgroundrb).
    Regarding your model object, it’s not a good idea to pass model objects across processes (don’t forget your worker is in a separate process). That’s the “no connection to server” problem. To pass models around, you should always pass the ID and then do a find in your worker to bring it back. It’s fine to pass large hashes and regular classes around, but models should be passed by ID only.

  17. nil

    well, marshaling of the object was the expected behavior. But point taken.

  18. Dave

    Yes, the object is marshalled, but the connection does not marshal well. The attributes marshal fine, but the model’s database connection and state do not make it across the queue.
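Dave’s point can be seen with plain Ruby: attribute data round-trips through Marshal, but objects holding live OS resources refuse to be dumped at all. Here a TCPServer stands in for ActiveRecord’s database connection:

```ruby
require 'socket'

# Plain attribute data survives the trip through the queue:
copy = Marshal.load(Marshal.dump({ :id => 42, :name => "song" }))

# But an object holding a live socket cannot be dumped at all, which is
# why a model's connection state can't follow it to the worker process.
server = TCPServer.new("127.0.0.1", 0)
begin
  Marshal.dump(server)
rescue TypeError
  # e.g. "no _dump_data is defined for class TCPServer"
end
server.close
```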

  19. Julien

    Is there any way to use Starling / Workling for a task that we want our app to do forever? Like, for example, parsing feeds?
    If so, how do we do it?

    My idea was initially to create a worker that pulls a task (parse_feed) and a feed (my_feed), parses the feed, and afterwards puts the task and the feed back in the queue. What do you think?

  20. Dave

    I suppose you could, but if all your application is going to do is parse feeds without any interaction with the web application, then you might be better off using the daemons gem to build a background process. The background process would do nothing but sit in a loop parsing feeds and putting them someplace. See the daemons gem documentation for more info.
    Now, if you want feed parsing to be kicked off by some user interaction, then Workling and Starling could be put to very good use.

    I would need to understand more of what you need before making any recommendation.

  21. Julien

    Thanks Dave for the reply…
    Well, my application is pretty simple: it receives feeds (urls) from another application through an API. Then it parses the feed “continuously” -or at least very often, max every 30 minutes- to check if there are new items. When new items are detected, it “posts” these new items to a third Rails application… And that’s pretty much it.

    The 2 big constraints that I have: 1) many, many feeds (up to 100k); 2) speed: a feed should be parsed at least every 30 minutes!

    Thanks for your help
    Julien (julien DOT genestoux AT gmail DOT com)

  22. Dave

    This doesn’t sound like something Workling would be good for. You could probably fake it by posting messages back to yourself, but it wouldn’t be the most efficient.
    When you say “often”, is there a minimum as well? Say you had something really efficient. Would you really want to parse continuously? I’m guessing you would want some minimum time (every 15 minutes, max every 30 minutes). Am I right?

    I could see something like:

    – Your first app’s API dumps the feeds to parse into a database
    – Build a daemon that polls the db for feeds that haven’t been parsed in 15 minutes, sorted by when last parsed (oldest first). Each feed to parse is dumped on a queue for processing. You could use Starling for this.
    – Build another daemon that does nothing but listen on the queue, parse feeds, and hand them off to your third application.

    I split out the daemons because you can add more of the second daemon to make sure you keep up with all the parsing.

    Feel free to email me for more. My email is dave AT [this domain]
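The selection step of the polling daemon Dave describes could be sketched like this (pure Ruby, feeds as plain hashes; the 15-minute threshold is the one assumed above):

```ruby
# Hypothetical selection step for the polling daemon: pick feeds that
# haven't been parsed in the last 15 minutes, oldest first, so each
# can be dumped on the queue for the parsing daemons.
STALE_AFTER = 15 * 60  # seconds

def feeds_to_enqueue(feeds, now = Time.now)
  feeds.select  { |f| now - f[:last_parsed_at] >= STALE_AFTER }
       .sort_by { |f| f[:last_parsed_at] }   # oldest first
end
```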

  23. Julien

    Thanks Dave!
    Here is what I actually did, since I need to parse many feeds at the same time.
    First, I took your patched version of Workling.

    I created 10 ParserWorkers (workling), and 1 dispatch worker (daemon),

    First, each feed has a “frequency”, and I record the time of the last parse.

    My DispatchWorker runs continuously and fetches all the feeds that need to be parsed (based on the frequency and the last parse). Then it sends each one to a random worker.

    The ‘ultimate’ thing would be to know the size of the queue for each worker and dynamically launch new workers if the queues get too big… but I really don’t think it’s possible to launch new workers dynamically…

    Anyway, thanks again!

  24. Dave

    No problem. Glad I could help.
    Regarding dynamically adjusting workers, you could do that with a daemon pretty easily, but I’m not sure offhand how to do that with Workling. Note, you can call memcache stats on Starling, and you will get a bunch of information regarding all the queues (including size). My only concern is that it may not be as simple as firing up more workers. Eventually, you may get CPU starved.

  25. prathap

    This addresses Julien’s post.

    1) This might be very obvious, but does the starling/workling pair work across multiple web servers? That is, starling is a ruby process that runs in memory and maintains the queue, and workling is another rails process that listens to entries in the queue and executes them. Will this seamlessly work as I add more webservers?

    2) I don’t believe daemons work across web servers, and so you will have to remember to start up a daemon when you add a new server. Doesn’t seem like a big deal, but if you think about all the other configuration tasks that are required to add another webserver to scale your rails app, remembering to spawn a daemon process is one more task to execute to have your app working properly.



  26. Dave

    1) Absolutely, you can run starling and workling on multiple servers, and they do not need to be on the web servers. You can run the starling/workling processes anywhere. If you require feedback (progress) from the workling servers on your web site, then you will have to come up with a way to get that across. The existing way will work, but not so great with multiple starling instances. I would probably switch over to memcache to provide feedback to the site.
    2) I’m not sure what you mean by daemons working across web servers, but I could see specifying capistrano roles for workling/starling servers. Perhaps, they always run on the app servers, but they could be another type of app server. Of course, wherever workling and starling run will need to have the appropriate config tasks setup to ensure starling and workling execute, but that should be a simple matter of splitting up the existing god (or monit) config file I provided — different config files based on the server’s role.

  27. prathap

    Thanks Dave,
    Sorry I might not have been very clear with questions, or maybe you answered it and I didn’t quite grasp it. Just to confirm:

    1) Take memcached for instance. I have it running on machine A, which also has a webserver and mysql running on it. If I decide to add another machine, say machine B with only a webserver, all I really need is to have the memcached client running on machine B, and all requests going to B will know to query the memcached server running on machine A. Similarly, if machine A had starling running on it, when I add machine B, all I would need is to have the starling client running on B (that is, run: script/workling_starling_client start) so that requests can enqueue jobs on the starling server running on machine A, correct?

    2) Sorry, my question about daemons was utterly stupid. What we are trying to do is have a daily job that scans through our db tables and runs some analytics/calculations to be displayed on the website on a daily basis. I guess the ideal solution would be to pull out mysql onto another machine C that sits behind machine A and machine B (the webservers), and have a daemon process running on machine C that scans through the db and does the calculations nightly. For this kind of a job, using starling/workling wouldn’t be very efficient (or would it?) because these aren’t jobs triggered by HTTP requests. This is just a nightly analytics job. However, I am considering starling/workling for this job because I know for a fact that we will need to employ starling/workling for tasks they are perfect for. So why not just implement it for this nightly job and keep one solution, instead of having a daemon for the nightly job and then implementing starling/workling for stuff like video uploading?

    Thanks again,


  28. Dave

    1) Yes on memcache, but no on the starling. If machine A has memcache server, a webserver, and mysql running on it, and machine B is only a webserver, you would use the same memcache client on machine B to talk to memcache on machine A and starling (wherever that is installed). Usually, I see mysql getting pulled off on its own once multiple web/app servers are added, but here is a possible scenario:
    Machine A: webserver, memcache server, memcache client, mysql, starling, and workling.

    Machine B: webserver, memcache client

    Machine B would call memcache server on machine A and starling on machine A (via memcache client).

    I’m assuming here that your web server also has your ruby application running on it.

    You could also have starling and workling running on each web server. In this case, your app would be configured to call the local starling, and your workling_starling_client would pull from the local starling.

    One thing to be careful of is that workling_starling_client is kind of misnamed. It isn’t actually the CLIENT for starling. It is more the LISTENER for starling in traditional queue-based processing. Your application uses memcache-client to PUT messages onto the starling queue. workling_starling_client uses memcache-client to PULL messages from the starling queue for processing. With the proper configuration, there are no limits to where or how many starlings and worklings are running.
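The PUT/PULL split above can be simulated with a plain in-process Queue standing in for Starling (the real thing goes through memcache-client over a socket, but the roles are the same):

```ruby
# Stand-in for one Starling queue: the app PUTs, the listener PULLs.
starling_queue = Queue.new

# App side: memcache-client's set() becomes a push here.
starling_queue.push(:worker  => "MyWorker",
                    :method  => "do_something_big",
                    :options => { :some_arg => 5 })

# Listener side (the role workling_starling_client actually plays):
# block until a message arrives, then dispatch it to the worker.
job = starling_queue.pop
```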

    2) I have something similar. What I do is have a rake task that calls MyWorker.asynch_my_method. Then I have cron kick off the rake task. This way, your app can kick off the task via an HTTP request as well.

    Even without workling/starling, I use rake tasks for all my background tasks. Once you do that, it’s trivial to kick off scheduled jobs with cron. For example:

    crontab -e
    0 4 * * 0 cd /opt/apps/your_app/current && /usr/local/bin/rake RAILS_ENV=production company:update

    Using rake tasks also makes it easy to kick off tasks from capistrano.
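A rake task matching that cron line might look roughly like this (my sketch; the :environment dependency is stubbed so it runs outside Rails, and the worker call is hypothetical):

```ruby
# Hypothetical lib/tasks/company.rake matching the cron line above.
require 'rake'
extend Rake::DSL

task :environment do
  # in a real app this dependency boots Rails
end

namespace :company do
  desc "Kick off the weekly company update in the background"
  task :update => :environment do
    # MyWorker.asynch_update_companies would go here (hypothetical worker);
    # a flag stands in for it so the sketch is self-contained.
    $update_ran = true
  end
end

Rake::Task["company:update"].invoke
```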

  29. prathap

    Thanks a lot for the clarification and suggestions!

  30. links for 2008-05-16 « Bloggitation

    […] Ruby Background Tasks with Starling (tags: memcached ruby rails programming 247up) […]

  31. Bill Harding

    Hey Dave,
    Great writeup, and even better follow up on these comments. After spending 12+ hours fighting BackgroundRb over the last couple days (and much more than that cumulatively), I am very ready to believe that Workling will be the solution to our asynchronous processing. After getting it preliminarily working within my first hour of reading about it (compared to about a day to get BackgroundRb initially set up), I’m very optimistic.

    The main objective I have that I’m a little bit unclear about whether I can achieve: can Workling run on multiple boxes currently? I presume it is possible to use Starling with multiple boxes by instantiating multiple Starling objects like so:

    — starling.rb (in initializer directory) —
    require 'memcache'

    # (the server definitions were stripped by the comment filter; presumably
    # something like this, with your real host:port values)
    STARLING_SERVER_1 = MemCache.new('host1:22122')
    STARLING_SERVER_2 = MemCache.new('host2:22122')

    STARLING_SERVER_1.set('my_queue', { :my_data => true })
    STARLING_SERVER_2.set('my_queue', { :my_data => false })
    — end —

    Now, I haven’t run that code, but it seems like it would probably work. However, I’m not sure how that example would extrapolate to Workling, since the Starling server that Workling uses is specified statically in the yml file.

    If Workling can handle this, it would have a distinct advantage over BackgroundRb, which for all intents and purposes, from my experience, can connect only to one server.

    One other question you may or may not know the answer to: is the data transmitted between a remote Starling and a Rails server just sent as raw data (i.e., packet sniffable, i.e., not good to send sensitive data over the connection)?

    Thanks for any help! Once I get this working, I’m hoping to accumulate as many Workling setup notes on my blog as I can, since there’s something of a scarcity right now, except from you and the creator.

  32. Bill Harding

    P.S. If sensitive data shouldn’t be sent between Starling and Rails servers (as is my guess), I would probably want to set up my Worklings so that my sensitive Workling processes run locally (on the web server), and the non-sensitive Worklings run on a separate server (which saves CPU/memory on the web servers). Maybe this is possible by calling the local Worklings directly by their class name, e.g., MyWorker.asynch_method_call(options), and the remote Workling through the ol’ Workling::Remote.run(:my_worker, :my_action, options)…?

  33. Dave

    There is no need to write any additional code to have multiple starlings or worklings. To have multiple starlings (on a single machine or multiple), simply execute starling wherever you want (even multiple on the same machine). Then, update your config for multiple starlings:

    listens_on: [host1:22122, host2:22122]   # your starling host:port pairs

    Now, wherever your app or workling runs with this config, puts and gets to the queue will randomly access the two starling queue servers.

    Running workling on multiple machines requires no changes as long as config/starling.yml points to the appropriate starling(s). If you want to run multiple worklings on a single server, then you need to change script/workling_starling_client. Set :multiple => true, and you should be good to go.

    Essentially, there is no limit to how many starlings and worklings you have running. Also, with the right config, they can run anywhere. If you have a lot of traffic and long-running tasks, it’s a good idea to run starling/workling on an app server rather than a web server. I also find it easier to pair up starling/workling on app servers rather than separating them. How many you need will obviously depend on your app.

    Regarding sensitive data, you are correct that data between your app, starling, and workling is sent in the clear. It’s just like memcached or a database call. Objects are marshalled and sent over a port. As it is now, there is no way to specify servers to run workers on. I’d say your better bet would be to make sure your network is secure to limit concerns about anyone sniffing packets between your servers. If you have multiple servers, then I assume your database is on a separate server. Do you encrypt that traffic?

  34. Bill Harding

    Dave — thanks for the quick and detailed response, that’s great. Way cool that setting up multiple Starlings/Worklings is as easy as adding an entry to starling.yml. I like the sound of this!
    Right now, all of our servers are hosted, so I can’t just throw up a firewall in front of them and go about my business. Because I haven’t had the time yet to research how to go about encrypting the traffic between our servers (sounds hard), I’m just running the DB and app on a big server, then running other non-sensitive things (memcached and, soon, starling) on other servers that are set up to restrict traffic with iptables. So my fundamental problem remains, which is that I need to either 1) figure out how to encrypt remote calls to the DB (could then have the remote Starling query the DB to get the sensitive data it needs to execute its task) or 2) figure out how to make sure that certain Workling calls run on a local Starling.

    If you have any experience as to what the relative difficulty of those two tasks might be, I’m all ears. Otherwise, a-Googling I go.

  35. Dave

    I haven’t had the experience of requiring encryption of network traffic. I’ve always solved that problem by securing the network, but I did have the luxury of my own servers. Then again, I was dealing with medical data so hosting was out of the question.
    In your case, you could certainly setup iptables to restrict access, but like you said that doesn’t secure the traffic. What is your hosting service? Most have pretty good policies about not watching traffic, but it all comes down to how much you trust them. Unfortunately, my guess is that it will be very difficult to encrypt all your traffic.

  36. Bill Harding

    Hey Dave,
    Thanks for the ideas above. I’m now trying to figure out how to go about debugging Workling when things go wrong. I’ve got my local server set up so that it should be connected to a remote starling. My remote starling is running on a machine that has my Rails app on it. I started stuff on the remote machine roughly as you prescribe:

    sudo starling -d -h my_host_ip
    script/server -e production -d
    script/workling_starling_client start

    I’m getting no errors in the production log on my local machine, but the async call basically just seems to disappear. I tried invoking my event both as you describe and as the RailsLodge page describes:

    ScraperWorker.asynch_do_scrape(my_options) AND, :do_scrape, my_options)

    I’ve double-checked that the starling.yml on my local machine points to the right starling IP and that the RemoteRunner stuff is in my environment.rb. I note that on the remote server, my scraper calls are not showing up in my Starling spool directory, so maybe the async call is not connecting to Starling for some reason?

    Since the extent of Workling’s log info seems to be a line here and there in the production.log (and my production.log gives me no guff about this call), and Starling gives me even less logging info, I am not really sure what to do to narrow down problems when mysterious things go wrong. Any advice?

  37. Dave

    My guess is that something is stopping the traffic from getting to starling. You said that you had iptables on your machines. Do you have port 22122 open on the remote machine? Also, make sure that you are not using localhost (127.0.0.1) for an IP.
    Regarding starling’s log, there is a bug in starling that prevents you from using -v or -l.

    My advice is get things going to starling’s queues. You can see that by watching the file sizes on the queue files grow. After that, add lots of logging to your workers.

  38. Ram

    I had a similar problem to Bill’s, but now I can see my queues getting bigger in the /var/spool/starling directory. But when I run script/workling_starling_client run, I get no output. How do I know if the client is listening? How can I check what the client is seeing?
    BTW, I had this working couple of days ago.


  39. Dave

    There isn’t any output when run as a daemon. There are two ways to see what may be up:
    1. Run "script/workling_starling_client start -t". This will execute the server in the foreground so you can see any puts. There aren’t a lot, but you can add a bunch in poller.rb. clazz_listen and dispatch are where most of the fun happens.

    2. Workling will write something to the RAILS_DEFAULT_LOGGER, but most is in the debug log so make sure your log level is set to debug.

    One thing to be aware of is that queues will grow on puts AND gets. When something is put on the queue, it grows by the size of all the args (plus a little). Gets are 1 byte at a time. So if you see your queues growing one byte at a time every second or so, the workling server is doing its thing. If the queues are growing by larger chunks, then your app is successfully adding items to the queue.

  40. Thomas Kadauke

    Hey there,
    Here is a completely different approach to background processing, that can be used with any of the above mentioned background processing or messaging frameworks, like ActiveMQ:

    Instead of implementing a complete background task communication protocol, this solution builds on top of any communication protocol to execute code in a background process. Plus, it is the only solution that I know of that has DRY error recovery support: if, for some reason, the communication to the background task fails, it is possible to run the task in-process or write it to disk for later replay.

  41. Dave

    Very interesting, although I disagree that most background solutions are not DRY. In fact, the Workling solution allows you to be absolutely DRY by letting you use all of your application’s code. In the scheme presented, it is unclear how I would use code outside of what is serialized in the background block.

  42. Ram

    Is there any way to make sure that the workers get reloaded on every server request? Currently, I have to restart the server after a change to the worker. I tried replacing require with require_dependency, but that doesn’t seem to help.

  43. Dave

    As far as I know, daemons will not work that way. I always restart the worker each time by adding a restart of workling to my capistrano tasks.
    Note, if you are debugging in development, you can turn off asynchronous processing by commenting out:

    #Workling::Remote.dispatcher = Workling::Remote::Runners::StarlingRunner.new

    in your config/environments/development.rb file. That will cause all your asynch calls to be synchronous, and reloading works. Your workling process does not need to be running (it isn’t being used anyway).

  44. Ram

    I guess I wasn’t quite clear. In synchronous mode, I have to restart my mongrels each time I make a change for the change to take effect (even in development mode).
    I don’t have starling / workling running at all. Anyway, I quick-fixed it by creating a class Background with an appropriate method_missing; my workers are subclasses of this class. It seems to have the behavior I want for now – though I am sure I am missing something.


  45. Ram

    I have the line commented out in development.rb, and I am running in synchronous mode. Thanks for the tip, btw. But I need to manually restart the server each time I make a change to my background workers. Are you able to make changes to your background workers and have the changes show up without restarting (in synchronous mode)? Thanks.

    BTW: Your blog is worth its weight in gold.

  46. Ram

    …. the weight includes the weight of hardware of course 😉

  47. Dave

    Yes, when running in synchronous mode, the workling process is not used. Everything runs in the context of your main application. If you have to restart your workling process to see the changes, then you are not running in synchronous mode.

  48. Dave Avatar

    Interesting. Do you have:
    config.cache_classes = false

    in your config/environments/development.rb file?

    When running in synchronous mode, your worker classes should be exactly like any other classes and should be reloaded automatically. I haven’t had this issue, but I tend to do the majority of my development and test in the console anyway.

    Wish I could be of more help, but if you have cache_classes = false, your worker classes should reload automatically. Do other classes reload for you, or is it only the worker classes?

  49. Ram Avatar

    config.cache_classes is false, and the rest of my models and controllers don’t require reloads. I even tried putting require_dependency in development.rb for my worker classes. Anyway, my current solution works – I basically have AsyncWorker < MyCustomBG < Workling::Base
    MyCustomBG has method_missing defined inside a class_eval based on the definition of Workling::Remote.dispatcher
    It’s a hack – but works for now 🙂

    Thanks a lot!
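    For readers curious about the shape of that hack, here is a toy, self-contained sketch — class and method names are invented for illustration, not Ram's actual code:

```ruby
# A base class turns asynch_* calls into plain in-process method calls,
# so worker subclasses reload like any other class in development.
class Background
  def self.method_missing(name, *args)
    if name.to_s =~ /^asynch_(.+)$/
      new.send($1, *args)   # run the worker method synchronously
    else
      super
    end
  end

  def self.respond_to_missing?(name, include_private = false)
    name.to_s.start_with?("asynch_") || super
  end
end

class AsyncWorker < Background
  def add(a, b)
    a + b
  end
end

AsyncWorker.asynch_add(2, 3)  # => 5
```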

  50. Aaron Gibralter Avatar
    Aaron Gibralter

    Are there any comprehensive resources on getting starling/workling up and running in a multi-stage environment?
    I’m still really confused as to what exactly is necessary for starling/workling. Do I need to set up a memcached server or is ‘$sudo gem install starling’ enough?

    What capistrano tasks should I have? Should starling be restarted ever? When should workling be restarted? If I ‘$cap deploy’ what happens to the current workling processes?

    Thanks so much in advance!

  51. Dave Avatar

    I haven’t seen too many (outside of this series).
    Regarding your other questions, let me have a go.

    The only external gems necessary for workling/starling are starling itself and the memcache-client. The memcached server is not required unless you use it for other things. One of the nice things about starling is that it is built on the memcached interface, and that means it is available from many languages and platforms. Plus, since so many apps use memcached for caching, it’s nice not to have yet another protocol to deal with.

    Other than the occasional reboot for other reasons, I have never restarted starling or had it go into any form of bad state. It’s been a rock. Even though I do have cap tasks for start/stop/restart of starling, those tasks are not part of any recipes.

    I restart workling whenever I restart my application. See part 3 for a description of my cap tasks.

    Now, there is one minor issue with this scheme that I haven’t got a good fix for yet. If you do stop the workling process, it may kill a worker in the middle of a task. This is due to how daemons is implemented (workling uses daemons). I’ve been in touch with the author to figure out a safe way to kill the process, but we haven’t made much progress. What I want is a way to signal the pollers to exit their loops, but daemons makes that difficult. I suppose you could stop the application, wait for workling to get through everything in the queues, restart workling, then start your app. I haven’t got to the point where this is a real problem yet, but I could see it coming.

  52. Aaron Gibralter Avatar
    Aaron Gibralter

    Hi Dave,
    Thank you for the response! Sorry I was a bit trigger happy with that post — I should have read the rest of your blog first :).

    Anyway, as for the last point (restarting workling), might it be possible to have each started instance of workling use a unique starling queue (perhaps prefixed by the timestamp of when workling starts up)? That way, you can “signal” the workling process to shut down (set some sort of class variable?), and it will once it has exhausted its own starling queue. Plus, this would mean that for a new deploy, you could start a new workling right away to work with the new code. Just a few thoughts I guess… perhaps you’ve already considered them.

  53. Dave Avatar

    I hadn’t thought of using a special queue because I was concerned that if there are many methods (queues) in a worker, it might take a while to get to the stop message. I was working through some form of signaling mechanism so the process isn’t just killed when you stop it. I have some of the code in there already to stop things, but daemon coding is not my area of expertise so I wasn’t able to spend the time necessary to get the signal to work correctly.
    If you are up for suggesting a working scheme, please feel free to do so.

  54. Dave Avatar

    No, you’re not missing anything. Certainly, if I was changing the workers then I would want them to empty out their queues before updating. In that case, I would do:
    1. Stop my application to prevent new messages
    2. run stats on starling to see how many items are left to process
    3. When all queues are empty, restart workling
    4. Start my application

    I suppose to be safest you would want this automated in the workling shutdown process. However, if your queues tend to be long and you haven’t changed a worker, then all you really need to do is stop all the workling threads between messages. The ultimate would be to have both options available.

  55. Aaron Gibralter Avatar
    Aaron Gibralter

    Wouldn’t you want a worker to finish off all of its queues before dying for consistency? For instance, if you deploy a new version of a site which changes the interaction between the app and the workers, a new worker picking up on an old queue might break things. Of course this can lead to memory strain when the queues are long and you’re deploying a new app (new workers + old workers finishing off queues), but it seems like the only safe option besides stopping the queues early.
    Perhaps I’m missing something fundamental about workling/starling?

  56. Raj Avatar

    Hi Dave,
    we are facing an issue while updating a model from workling. The issue is: I have a counter column in a table; I increment it from the controller and decrement it from the workling. This counter is used to control the number of worklings to be triggered.
    The table in the DB shows the decremented value, but the controller still cannot see the change made by the workling. Is this the expected behaviour of workling?
    Is there any way for the controller to access the changes made to the model (db table)?

    Thanks in advance, Kevin.

  57. Dave Avatar

    This is a pretty common Rails issues. When you have multiple instances of a model (one in the app and one in the workling in your case), and one instance saves, the other will not see the change. You have a couple of options.
    1. Make sure to reload the model in your controller. This may work, but timing will be a problem. What if the workling is delayed?

    2. Use a transaction and lock the model. Just be careful with locking multiple models, you could end up with a deadlock. If you do modify multiple models, make sure you always access them in the same order.
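    The stale-instance problem (and the reload fix from option 1) can be illustrated with a toy, plain-Ruby sketch — no ActiveRecord here, and all names are invented:

```ruby
# Two loaded copies of the same "row" don't see each other's saves
# until one of them reloads.
DB = { :counter => 5 }   # stands in for the database row

class Record
  attr_reader :counter

  def initialize
    @counter = DB[:counter]   # load a private copy, like Model.find
  end

  def decrement!
    @counter -= 1
    DB[:counter] = @counter   # save back to the "database"
  end

  def reload
    @counter = DB[:counter]
    self
  end
end

controller_copy = Record.new
workling_copy   = Record.new

workling_copy.decrement!        # the workling saves a change
controller_copy.counter         # => 5 (stale copy)
controller_copy.reload.counter  # => 4 (option 1: reload before reading)
```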

  58. Sai Emrys Avatar

    Hey all.
    I was reading through this while trying to figure out some issues w/ workling & starling & memcached and wrote ’em up on my blog here:

    Hope it helps some of you. 😉

    – Sai

  59. Dave Avatar

    Hard to say what is happening without more information. Note, if you are setting up a mailer worker, I suggest trying the following:
    create lib/asynch_mail.rb:

    # Makes an ActionMailer class queue its emails onto a workling queue instead of sending them synchronously.
    # To make a mailer use a queue, simply include this module in the class like this:
    # class MyMailer < ActionMailer::Base
    #   include AsynchMail
    # end
    # From now on, all MyMailer.deliver_whatever_email calls create an entry in the corresponding
    # MailerWorker.deliver_mail queue, to be processed by a worker. If you still want to deliver mail
    # synchronously, add a bang to the method call: MyMailer.deliver_whatever_email!
    module AsynchMail
      def self.included(base)
        base.class_eval do
          class << self
            alias_method :orig_method_missing, :method_missing

            def method_missing(method_symbol, *parameters) #:nodoc:
              case method_symbol.id2name
              when /^deliver_([_a-z]\w*)\!/
                # bang form: strip the bang and deliver synchronously
                orig_method_missing("deliver_#{$1}".to_sym, *parameters)
              when /^deliver_([_a-z]\w*)/
                queue_mail($1, *parameters)
              else
                orig_method_missing(method_symbol, *parameters)
              end
            end

            def queue_mail(method, *parameters)
              mail = self.send("create_#{method}", *parameters)
              MailerWorker.send("asynch_deliver_mail", :class => self.name, :mail => mail)
            end
          end
        end
      end
    end

    Create app/workers/mailer_worker.rb:

    # =MailerWorker
    # Handle all email tasks in the background
    class MailerWorker < Workling::Base
      def deliver_mail(options)
        # re-constitute the mailer class and deliver the already-built mail
        options[:class].constantize.deliver(options[:mail])
      end
    end
    Now, include AsynchMail in your Mailer class, and all mail will automatically be asynchronous.

    If you want to track down your issue, please provide your worker code and an example of how you call it.

  60. Kurian Mathew Avatar
    Kurian Mathew

    I am facing a problem while working with Workling. It is giving me an error like:
    Calling MailingsWorker#send_mailing({:uid=>”mailings_workers:send_mailing:df38f0fb5bf5d647156535e4a6d71e19″, :email=>””})
    FAILED to process queue mailings_workers__send_mailing. # could not handle invocation of send_mailing with {:uid=>”mailings_workers:send_mailing:df38f0fb5bf5d647156535e4a6d71e19″, :email=>””}: wrong number of arguments (1 for 0).
    Can someone help me out?
    Thanks in advance

  61. Sean McGilvray Avatar

    Is the sleep time the time workling waits between sending each message in the queue? I have a server that only allows 500 messages an hour, and I want to throttle certain messages that would go out to the entire database.
    Thank you,

    Sean McGilvray

  62. Dave Avatar

    Sleep time is the time the workling poller waits between GETs on the starling queue. It does not affect sending messages. As an example, your application can make 1000 asynch_* calls, and each will immediately go onto the appropriate Starling queue. The Workling processor will cycle through each of the Worker classes, processing messages. The sleep occurs between each processing of a Worker class. Remember, a Worker class can have multiple methods, and there is no sleep between those.
    Make sense?

  63. kevin lochner Avatar
    kevin lochner

    I’m seeing workling use 85MB on startup, which is about equal to the steady-state usage of my rails app. Is this typical?

  64. Dave Avatar

    Yes, this is typical. Essentially, the Workling background task is another instance of your Rails application, so it follows that it would be close to the same size (a full Rails app, minus Mongrel).


  66. Jason Avatar

    OK, I’ve been battling for over a week now with workling, and it is almost working – my app successfully queues and processes items with workling for about an hour before dying with the following error:
    /opt/ruby-enterprise-1.8.6-20090113/lib/ruby/gems/1.8/gems/rails-2.2.2/lib/commands/runner.rb:47: /home/jasonsapp/app/releases/20090324222750/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:77:in `method_missing’: MemCache::MemCacheError – Broken pipe (Workling::WorklingConnectionError)
    from /home/jasonsapp/app/releases/20090324222750/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:50:in `request’
    from /home/jasonsapp/app/releases/20090324222750/vendor/plugins/workling/lib/workling/remote/runners/client_runner.rb:38:in `run’
    from /home/jasonsapp/app/releases/20090324222750/vendor/plugins/workling/lib/workling/remote.rb:38:in `run’
    from /home/jasonsapp/app/releases/20090324222750/vendor/plugins/workling/lib/workling/base.rb:53:in `method_missing’
    from /home/jasonsapp/app/releases/20090324222750/app/models/feed_parser.rb:138:in `parse_feed’
    from /home/jasonsapp/app/releases/20090324222750/app/models/feed_parser.rb:112:in `each’
    from /home/jasonsapp/app/releases/20090324222750/app/models/feed_parser.rb:112:in `parse_feed’
    from /home/jasonsapp/app/releases/20090324222750/app/models/feed_parser.rb:210:in `update’
    from (eval):1
    from /opt/ruby-enterprise-1.8.6-20090113/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `eval’
    from /opt/ruby-enterprise-1.8.6-20090113/lib/ruby/gems/1.8/gems/rails-2.2.2/lib/commands/runner.rb:47
    from /opt/ruby-enterprise-1.8.6-20090113/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require’
    from /opt/ruby-enterprise-1.8.6-20090113/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require’
    from script/runner:3

    Any suggestions would be more than welcome; this has been really frustrating =]

  67. Dave Avatar

    It looks like workling lost its connection to starling. Is starling still running? Did you run out of memory? Occasionally, my worker code barfs, but I haven’t touched starling on any of my machines in months – never had a single issue with it.
    How large are your messages? How long does each worker method take to run?

  68. Dave Avatar

    Yes, you should be able to set :multiple => true and run more than one of your worklings. There will be no issues with multiple processes and threads pulling from the queue. The Starling queue is designed for that.

  69. Mark Avatar

    Firstly, thanks Dave for your great blog posts and even more for staying on top of comments!
    My starling queue is growing faster than I can process it. I am a bit confused about workling classes vs. methods. In a previous comment you said:

    “The Workling processor will cycle through each of the Worker classes, processing messages. The sleep will occur between each processing of a Worker class. Remember, a Worker class can have multiple methods, and those will not have a sleep between.”

    Right now I have 1 workling class (my_worker.rb) with 6 methods in it (def do_something) that each take a few seconds to complete.

    If I were to put each method in its own worker class, it would create 6 threads. Would sleep then have a bigger impact? I suppose I’ll just set sleep to 0.001 so it’s a nonfactor, but it would still be nice to understand 🙂

  70. Dave Avatar

    Each worker class will get its own thread. Processing proceeds as follows:
    1. call method1
    2. call method2
    3. call method3
    4. sleep (if defined and no messages in queue)
    5. repeat

    If you don’t set up a sleep, there will not be any. Note also that things will only sleep if there is nothing in the queue. If there are messages in the queue, processing continues immediately. I’m not at my normal computer so I can’t check that, but that’s how I remember it.
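    The cycle above can be modeled in a few lines of plain Ruby — queue names and jobs are invented for illustration:

```ruby
# Toy model of one workling poller: each worker method's queue is drained
# in turn, and the sleep happens only at the end of a full cycle, and only
# when nothing was waiting.
queues = {
  "worker__method1" => [:job_a, :job_b],
  "worker__method2" => [:job_c],
  "worker__method3" => [],
}
processed = []

2.times do                          # two polling cycles
  did_work = false
  queues.each_value do |q|
    while (job = q.shift)           # no sleep between worker methods
      processed << job
      did_work = true
    end
  end
  sleep(0.01) unless did_work       # idle cycle: sleep before polling again
end

processed  # => [:job_a, :job_b, :job_c]
```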

    You have a couple of options. First, you can run multiple instances of your workling process – as many as you can fit, on one server or several. Second, you can split things up by worker class, and that will give a separate thread per worker class like you mentioned. Depending on the nature of what your worker methods are doing, some combination of both schemes may work.

    What I normally do is split up long running processes into their own workers. If there are a lot of quick utility methods, then I can combine them into one worker.

    Let me know if I can help more.

  71. Mark Avatar

    Wow, thanks so much for the quick response.
    You are right, if there are messages in the queue, it does not sleep between class calls. So that isn’t a factor when queues are growing too fast.

    That’s excellent advice about my two options. It does sound like I should use a mixture of both.

    For multiple instances, is it really as simple as setting :multiple => true in workling_client.rb and starting up a few instances? There won’t be any issues with two instances trying to pull from the queue at the same time?

  72. Infinite Scaling - Part 2 Avatar

    […] any others. On top of all those considerations, I needed background processing, and found a great article on using Starling and Workling in Rails. All that considered, it really doesn’t matter what […]

  73. Torsten Avatar

    Dave –
    great info on Starling/Workling. Do you happen to know if there’s a safe way within the workling worker to get additional queued messages for the worker instead of having workling call the worker for each message separately?

    We have a scenario where we’d like to rather combine all queued messages for a worker so the worker can process them all at once instead of separately each time workling calls the worker.

    Thx a bunch!

  74. Dave Avatar

    I can think of a few reasons why you might want to do that, but this is not what Workling was designed for. Do you want the worker to drain the queue each time? This might not work too well because Workling tends to be pretty quick.
    To do what you want to do, you would need to reproduce the functionality in one of the Workling::Remote::Invoker classes, but I would not recommend it — not very DRY.

    You could also have your worker maintain a list in memory, disk, or db. Then, process the list on some schedule.

  75. Workling Timing Issue » Big Dave’s Blog Avatar

    […] you probably noticed already, I use Workling a lot, and I wrote about it a few times (Part 1, Part 2, Part 3). One minor gotcha to be aware of is that you need to make sure you handle it when […]

  76. Checkout Hoptoad by thoughtbot, inc. » Big Dave’s Blog Avatar

    […] you have read anything I’ve done before, you know I do a lot of asynchronous processing with Workling. By default, Hoptoad will not log errors from anything outside of your controllers. Fear not. […]

  77. casualguy Avatar

    I’m having a hard time getting starling working in production mode, and I’m hoping you can shed some light on this problem for me. This is how it goes:

    I start starling from the root of the application like so:

    starling -d -P tmp/pids/ -q log/

    Then I start workling like this:

    ./script/workling_client start -t

    The first time I ran this, it complained because there was no development database, so I created a development database, and that error went away when I restarted workling. But when I try to actually run an asynchronous process, I get this message in log/production.log:

    Workling::QueueserverNotFoundError (config/workling.yml configured to connect to queue server on localhost:15151 for this environment. could not connect to queue server on this host:port. for starling users: pass starling the port with -p flag when starting it.

    So, I run:

    sudo killall starling

    then restart starling from the root of the application like this:

    starling -d -P tmp/pids/ -q log/ -p 15151

    which seems to work fine. But when I try to start workling again with script/workling_client start -t, I get this message in the console:

    /var/rails-apps/daisi/vendor/plugins/workling/lib/workling/clients/memcache_queue_client.rb:68:in `raise_unless_connected!’: config/workling.yml configured to connect to queue server on localhost:22122 for this environment. could not connect to queue server on this host:port. for starling users: pass starling the port with -p flag when starting it. If you don’t want to use Starling, then explicitly set Workling::Remote.dispatcher (see README for an example) (Workling::QueueserverNotFoundError)

    So, I tried changing the config/workling.yml file inside the workling plugin to make both production and development listen on 15151. That didn’t work, so I tried both of them on 22122 – still no dice. Then I tried a random port, but I get the exact same behaviour no matter what I put in workling.yml.

    any help is much appreciated,



  78. Dave Avatar

    When you start up workling in production, you need to specify the rails env. For example:
    RAILS_ENV=production ./script/workling_client start -t

    Otherwise, workling will come up in development (the default).

  79. casualguy Avatar

    thanx Dave, that was it! 🙂

  80. ricsrock Avatar

    Hi Dave, thank you for all your posts on workling and starling.

    Starling and workling have been successfully working on my site for several months. All of a sudden background processes stopped. I can’t seem to find what’s going on.

    Each time I restart the workling client, the app will process one background job and then nothing more. Nothing in the logs.

    I know this isn’t a lot to go on, but could you suggest what might be going on and what I might do to diagnose/fix it?

    Thank you.

  81. Dave Avatar

    When you say nothing in the logs, are you including workling.log and workling.output? Both are in the shared/pids directory by default. Often you will find stuff in there that is not in your app’s logs.
    Have you changed anything else (starling, workling, your app)?

    When you say it doesn’t process any more, does workling exit, or is it just stuck?

    Have you tried stopping workling, stopping starling, starting starling, starting workling? Note, starling will persist your queue by default, so you should not lose anything. Actually, that might be something to check. The starling queue files (in /var/spool usually) will grow even with a get. If you have disk space issues, strange things could start happening.

    Another thing to try is to add a bunch of logging code in ThreadedPoller::clazz_listen. Then, you might be able to see if your listener loop is doing something. Specifically, you want to make sure the loop continues to run even if there are no messages to process.

    In the end, I’ve never seen behavior like this, so without more info I’m not sure exactly what’s happening.

  82. ricsrock Avatar

    Thank you very much for your response. I still haven’t diagnosed what’s happening, but I’ve switched to rudeq and it’s working solidly (I may have lost some jobs in the process).
    Besides, I think there’s a lot to like about using the db for a queue. I’d appreciate your thoughts on rudeq. Upsides, downsides?

    A pure guess, but my host switched from mongrel to passenger. I’m wondering how that might have affected starling.

  83. Dave Avatar

    I use Passenger for everything, so it’s not directly a Passenger problem. However, if your host changed from Mongrel to Passenger, you could have had a permission issue because I’m guessing that process users changed somewhere.
    Regarding rudeq, a db queue is perfectly fine, but like anything there are tradeoffs to putting your queue in the same db as your app. Depending on traffic, you could start to impact your application. The external queues will not have this issue (there are still other issues though!)

  84. tony braxx Avatar

    I set up starling and workling in the production environment successfully; I use them for sending emails. I run these lines to start starling and workling:

    starling -d -P tmp/pids/ -q log/ -p 15151

    export RAILS_ENV=production && ruby script/workling_client start

    Everything is fine for half a day to a day, but after that I get the following error:
    Workling::QueueserverNotFoundError (config/workling.yml configured to connect to queue server on localhost:15151 for this environment. could not connect to queue server on this host:port. for starling users: pass starling the port with -p flag when starting it.

    Restarting Starling always solves the problem, but I can’t do it every day!

    any help is much appreciated,


  85. Dave Avatar

    When you get the error, does processing stop forever, or does it start up again after a bit? I’ve occasionally seen this error in my logs, but it never stops things.

  86. tony braxx Avatar
    tony braxx

    I get the following error too, and after that processing stops forever. Restarting Starling or workling always solves the problem.
    /memcache_queue_client.rb:68:in `raise_unless_connected!’: config/workling.yml configured to connect to queue server on localhost:22122 for this environment. could not connect to queue server on this host:port. for starling users: pass starling the port with -p flag when starting it. If you don’t want to use Starling, then explicitly set Workling::Remote.dispatcher (see README for an example) (Workling::QueueserverNotFoundError)

  87. Dave Avatar

    This just started happening after running for a while? Did some permissions change? Disk space available? How does memory look? The fact that it starts up for a while and then has a problem does not make sense to me. Let me look around a bit.
    You are running the latest starling, memcache-client, and workling. Right?

  88. tony braxx Avatar

    Yes, I am using starling 0.9.8, memcache-client 1.7.6, and the latest version of workling. I am on a shared server at DreamHost, and by the way, I haven’t changed any permissions. Based on your last reply, I think a lack of memory may be causing the problem.
    My site is still in testing and doesn’t have many visitors; I don’t know whether changing servers will solve the problem or not.

  89. Dave Avatar

    Memory could be a problem if starling or workling are swapping. Swapping will delay connections and could easily result in an unable-to-connect error like the one you’re getting. That could explain why it works for a while and then stops.

  90. Mirko Avatar

    I am trying to use cron to fire the following workling job:

    #!/usr/bin/env ruby

    require "rubygems"
    require "memcache"

    host = "localhost"
    # port = 22122
    port = 15151
    starling = MemCache.new("#{host}:#{port}")

    case ARGV[0]
    when 'user_warning'
      # starling.set 'PhotoWorker.asynch_send_hour_warning', {}
      starling.set 'photo_worker__send_hour_warning', {}
    when 'contacts_warning'
      starling.set 'photo_worker__send_contacts_warning', {}
    end

    But it never seems to run. PhotoWorker is my worker class. Am I doing something wrong?

    P.S. It works in dev, just not in production.

  91. Dave Avatar

    When you fire off the task with cron, are you setting RAILS_ENV=production for the script? That’s usually what is missing if it works in development.
    I usually add:

    export RAILS_ENV=production

    to the bash profile of the user I run cron and/or rails processes as. This way I know the environment will be set correctly when I execute cron or rake tasks.

  92. Dave Avatar

    One more thing, rather than explicitly using memcache, you should be able to call the asynch method directly:

    That should add the correct message to your queue. The downside, however, of this approach is that you will need to load your rails environment.

  93. Jackson Blacklock Avatar
    Jackson Blacklock

    I am trying to delete the starling queues:


    >> starling.available_queues
    => [“vimeo_background_upload_workers__upload_to_vimeo”, “queue_paperclip_background_upload_workers_upload_to_amazon_s3”, “queue_paperclip_background_upload_workers__upload_to_amazon_s3”, “paperclip_background_upload_workers__upload_to_amazon_s3”]

    MemCache::MemCacheError: bad command line format

    Is there another way to delete the queue item? I have a pretty nasty back log that is starting to eat up my memory. If I kill the PID and restart the worker it seems to cache the queue and restarts everything.

    Any help would be greatly appreciated!



  94. Dave Avatar

    To terminate a queue with extreme prejudice, you can stop starling and delete the queue file. By default, the queue files are in /var/spool/starling. The filename will equal the name of the queue.
    BTW, these files are how the memory queue is rebuilt when you restart starling.

  95. Anooj Avatar

    Hi Dave,
    I am working on getting the starling/workling combo set up for my app, and I ran into this problem. I am using a Windows machine for development, and Linux machines host my deployments. When I start starling and set a queue on the dev machine, starling gives me an error because the path it creates automatically for my queue (/var/spool/starling/..) has a ‘/’ instead of a ‘\’. I tried setting the path explicitly when starting starling using:

    starling -d -q C:\..\..

    But it’s still doing the same thing. Is there anything I can do to get this to do the right thing?

    Thanks in advance.

  96. Dave Avatar

    Using -q is the only thing I can suggest. The problem is that I’m not sure Starling even works on Windows. I know it hasn’t been tested much.
    See this:

    Seth knows his stuff.

  97. Luke Avatar

    So I have this in

    ## config/workling.yml:
    listens_on: localhost:15151

    listens_on: localhost:22322

    listens_on: localhost:12345

    listens_on: #my_remote_ip_here#:22522

    listens_on: localhost:12345

    ## end of config/workling.yml

    I get the starling and workling set up on my staging server at #remote_ip#, and I get this outcome:

    # this succeeds:
    script/console development_remote
    >> r = MyWorkerClass.asynch_count_to_a_hundred_million
    => “my_worker_classes:count_to_a_hundred_million:7edb0e25b721ea725441b79d8ebb4d58”

    # and it goes to the queue, and I see the processor spike as ruby counts to 100,000,000 on my staging server

    … all good so far, but then

    ## this fails:
    script/server RAILS_ENV=development_remote
    # when I browse to a controller function which somewhere has
    # MyWorkerClass.asynch_count_to_a_hundred_million
    # I get the following error:
    “config/workling.yml configured to connect to queue server on localhost:22322 for this environment. could not connect to queue server on this host:port. for starling users: pass starling the port with -p flag when starting it. If you don’t want to use Starling, then explicitly set Workling::Remote.dispatcher (see README for an example)”

    So it looks like it’s not loading the proper set of details from workling.yml. How can I fix this?

    – Luke

  98. Dave Avatar

    Try: RAILS_ENV=development_remote script/server

    If it’s dropping back to development, it could be that it’s not properly picking up the RAILS_ENV.

  99. Luke Avatar

    Oh, I’m an idiot. Apparently starting a rails environment is now:
    script/server -e environment_name

    and NOT:

    script/server RAILS_ENV=environment_name

  100. Dave Avatar

    Doh! I did the same thing. I always forget about script/server being different from script/console.

  101. Background Jobs in Ruby on Rails « 4 Lines of Code Avatar

    […] – written by Blaine Cook at Twitter – Starling is a Message Queue Server based on MemCached – written in Ruby – stores jobs in memory (message queue) – Ruby client: for instance Workling – documentation: some good tutorials, for example the railscast about starling and workling or this blog post about starling […]

  102. Pierre Avatar

    Hi Dave,
    Every time I look for some info on Starling/Workling, I end up on your blog :-). It seems it is the most expert place to find advice on these two guys! So I thought I would ask my question here.

    On a user request, my app has to run a fairly complex optimization algorithm. In fact, it is so time consuming that I have to run it in the background and “slice” it into smaller pieces that talk to each other. I have done it with Starling and Workling:
    the algorithm is run as workers, and they communicate with each other passing messages with Starling.

    It works great when I have only one Mongrel, but it starts failing when I deploy and use Passenger. I get random MemCache errors:
    – MemCache::MemCacheError: No connection to server (localhost:11211 DEAD (Timeout::Error: IO timeout), will retry at xxx and
    – MemCache::MemCacheError: IO timeout

    They are really hard to track down since they seem to be random (i.e. sometimes things work great, but the next time they fail).

    I know that Passenger is a bit particular about how it spawns processes and requires a specific setup. But I am running in conservative spawning mode to be “safe” – still, it fails.

    Since my understanding is you are using the same setup (Starling/Workling/Passenger), have you had any problems? Are you using the memcache-client gem with Starling, or the memcached one?

    Thanks a lot!

  103. Dave Avatar

    Pierre, I recall seeing something like this, but I can’t remember the details. Unfortunately, my computer is not available until Thursday. I will take a look then.

  104. Pierre Avatar

    Thanks a lot Dave. If you find out and could share some hints on how you managed to overcome this problem it would be fantastic – I am definitely struggling with this, and there is not that much info around…
    I came across a fairly recent post where they say memcache-client just doesn’t work with Passenger, and I was getting really concerned…

  105. Dave Avatar

    I can’t seem to find anything specific in my code. I processed several million messages with Starling, Workling, and Passenger without a problem. The only thing I can think of is to make sure the :multithread arg is true when you initialize memcache-client. The issue here seems to be with Starling, which means that Passenger may not have anything to do with it. Your workers are running in the Workling processes, not Passenger.
    Are you getting the errors from your workers calling each other, or from the Passenger app trying to add a new Workling message to the queue?
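    In code, that option is passed when you build the client. A sketch only – this assumes the memcache-client gem and Starling listening on its default port, so adjust host and port to your deployment:

    ```ruby
    # Sketch: assumes the memcache-client gem and a Starling instance
    # on its default port (22122) -- adjust to your setup.
    require 'memcache'

    # :multithread => true makes the client wrap each operation in a
    # mutex, so one instance can be shared safely across threads.
    starling = MemCache.new('localhost:22122', :multithread => true)
    ```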

  106. Pierre Avatar

    Thanks a lot Dave – you pointed me in the right direction. My problem had nothing to do with Passenger. It took me a while to figure this out, but I think it was just a consequence of the load (I was testing with multiple simultaneous calls to my app). Indeed, I didn’t realize that the default timeout for memcache was 0.5s. Since I am making quite a lot of calls to the queue, some could not get completed in time and I was missing the respective message. When I set a long memcache timeout, the problem disappears.
    I have to say it is kind of weird, since I have to set it really long (more than 10s, or even leave it at nil). Something is blocking somewhere; I need to investigate.
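    For anyone else hitting this: the timeout is another option on the client constructor. A sketch, assuming memcache-client’s MemCache options and Starling’s default port (nil disables the IO timeout entirely):

    ```ruby
    require 'memcache'

    # memcache-client defaults to a 0.5s IO timeout, which can be too
    # tight under heavy queue traffic; raise it, or pass nil to disable.
    queue = MemCache.new('localhost:22122',
                         :multithread => true,
                         :timeout     => nil)
    ```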

    Anyway, thank you very much for your help on this.


  108. Lee Avatar

    Hi Dave,
    I know this is a very late comment, but I’ll shoot anyway.

    What if you want to run some tests that incorporate workling on the same box you develop on? Or what if your development box is the same as your production box (for perhaps a different branch of your code)?

    It seems workling only respects RAILS_ENV and there is no way I can see to get this to play nicely.

    I can do something like manually set RAILS_ENV=test in my tests, then set it back to RAILS_ENV=development afterwards, but that seems really hacky and far less than ideal.

    Any suggestions would be appreciated.



  109. Dave Avatar

    I’m not sure what you are trying to accomplish here. If you set RAILS_ENV=test, no queue will be used and your workers will be called synchronously, which allows testing to happen normally. You can also configure your environments to use asynchronous calls, but if you do that, your tests will only ensure that a message is put on the queue. There will be no way to know whether anything happened as a result of the message.
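    To illustrate the difference in plain Ruby – the names here are made up to show the pattern, not Workling’s actual API:

    ```ruby
    # Sketch of the pattern: in test, a synchronous runner invokes the
    # worker inline so its side effects are immediately assertable; in
    # production, a queueing runner only enqueues the message.

    class SynchronousRunner
      # Calls the worker right away, as when RAILS_ENV=test.
      def dispatch(worker, method, options = {})
        worker.send(method, options)
      end
    end

    class QueueingRunner
      # Stands in for the Starling-backed runner: only enqueues.
      attr_reader :queue

      def initialize
        @queue = []
      end

      def dispatch(worker, method, options = {})
        @queue << [worker, method, options]
      end
    end

    class EmailWorker
      attr_reader :sent

      def initialize
        @sent = []
      end

      def deliver(options)
        @sent << options[:to]
      end
    end

    worker = EmailWorker.new
    SynchronousRunner.new.dispatch(worker, :deliver, :to => "user@example.com")
    # worker.sent now holds "user@example.com" -- the effect is testable.

    queued = QueueingRunner.new
    queued.dispatch(worker, :deliver, :to => "other@example.com")
    # Nothing ran; you can only assert that a message was enqueued.
    ```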

