I recently worked on an application that took messages from internal applications and queued them up before sending them to an external vendor. The application was built because the messages were important enough that losing any sent during vendor downtime was not an option. Since all of the applications involved were built on Rails, it was an obvious choice to develop the queuing app in Rails as well and use a REST API to transfer the data into the queue.
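To make the handoff concrete, here is a minimal sketch of how an internal application might push a message to the queue app over REST. The endpoint URL, payload fields, and helper names are all hypothetical, not the actual API of the application described here:

```ruby
require "net/http"
require "uri"
require "json"

# Hypothetical endpoint for the queue app; the real host and path
# would depend on the deployment.
QUEUE_URI = URI("http://queue.internal.example.com/messages")

# Build the POST request an internal app would send to enqueue a
# message. The JSON payload shape is illustrative.
def build_enqueue_request(payload)
  request = Net::HTTP::Post.new(QUEUE_URI.path,
                                "Content-Type" => "application/json")
  request.body = JSON.generate(payload)
  request
end

# Send the request to the queue app.
def enqueue(payload)
  request = build_enqueue_request(payload)
  Net::HTTP.start(QUEUE_URI.host, QUEUE_URI.port) do |http|
    http.request(request)
  end
end
```

An internal app would then call something like `enqueue("type" => "order_update", "body" => "...")` and rely on the queue app to retry delivery to the vendor.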
The queue app went together quite nicely, and everything ran smoothly until it went live and updates were needed to fix some bugs. Since the queuing app was the mechanism for handling downtime outside of the main application, it wasn't safe for the queue to go down while the main application was still up. Out of necessity I quickly came up with a solution.
The final solution consists of running more than one mongrel cluster, preferably on different servers, and proxying to all of those clusters from the Apache instance that serves the main application. This provides redundancy at the mongrel cluster level, and if the main app's Apache server is down, the application won't be generating any messages to queue anyway.
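The Apache side of this can be sketched with mod_proxy_balancer. The hostnames, ports, and path below are assumptions for illustration; the real setup would match however the mongrel clusters are laid out:

```apache
# Sketch: balance across two mongrel clusters on different servers.
# Requires mod_proxy and mod_proxy_balancer to be loaded.
<Proxy balancer://queue_cluster>
  # First mongrel cluster (three mongrels on one server)
  BalancerMember http://10.0.0.1:8000
  BalancerMember http://10.0.0.1:8001
  BalancerMember http://10.0.0.1:8002
  # Second cluster on another server for redundancy
  BalancerMember http://10.0.0.2:8000
  BalancerMember http://10.0.0.2:8001
  BalancerMember http://10.0.0.2:8002
</Proxy>

# Route queue traffic from the main app's Apache to the balancer
ProxyPass /queue balancer://queue_cluster
ProxyPassReverse /queue balancer://queue_cluster
```

With this in place, Apache keeps sending requests to whichever cluster is still up when the other is restarting.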
To upgrade the queuing app without downtime, upgrade the code behind one of the clusters and restart that cluster, repeating until all clusters have been upgraded. This works very nicely, but it does require careful development to ensure that each update is compatible with the previous version of the application.
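The rolling upgrade can be sketched as a small shell loop. The server names, app path, and use of Subversion plus `mongrel_rails cluster::restart` are assumptions about the deployment; this version only prints the commands it would run, so you would swap the `echo`s for real `ssh` calls to execute it:

```shell
#!/bin/sh
# Rolling upgrade sketch: update and restart one cluster at a time,
# so the remaining cluster keeps serving requests throughout.
SERVERS="queue1.internal queue2.internal"
APP_DIR=/var/www/queue_app

upgrade_cluster() {
  # In a real deploy these would be ssh invocations, e.g.
  #   ssh "$1" "cd $APP_DIR && svn update"
  echo "ssh $1 'cd $APP_DIR && svn update'"
  echo "ssh $1 'cd $APP_DIR && mongrel_rails cluster::restart'"
}

for server in $SERVERS; do
  upgrade_cluster "$server"
done
```

Because only one cluster is down at any moment, the proxy keeps routing traffic to the others, which is exactly why each release has to stay compatible with the previous one.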