I run Passenger with nginx. Some months ago I deleted an nginx log file that was using up my disk space, specifically /var/log/nginx/error.log.1, about 5 GB in size, and then reconfigured logrotate to keep the log sizes down. Everything was fine until, days later, the deleted file was somehow reclaimed by the Passenger processes (as revealed by lsof +L1). The deleted file keeps growing in size, so apparently something is still writing to it. I'm not sure why anything would want to write to error.log.1, since that's a rotated log file (although it's been so long that I'm not sure whether I renamed error.log to error.log.1 while I was deleting/moving things around, which may somehow be related to the problem). Restarting Passenger via touch tmp/restart.txt didn't change the lsof +L1 output or reclaim the disk space, but restarting nginx did.
Now the really weird part: the system has been rebooted since then, and the problem still recurs. Days or weeks pass, then suddenly the available disk space shrinks; I check lsof +L1 and there's the deleted file again. What on earth could be going on here? It would be interesting to know how this happened, and helpful to know how I might stop it from recurring. Thanks.
The logrotate conf looks like this:

create 640 root adm
[ ! -f /var/run/nginx.pid ] || kill -USR1 `cat /var/run/nginx.pid`
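For context, those two lines would normally sit inside a full logrotate stanza. A complete one for nginx might look like the following sketch (the glob, rotation count, and frequency are assumptions, not taken from the original config; the key part is the postrotate signal, which tells the nginx master to reopen its log files so the rotated file's handle is released):

```
/var/log/nginx/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    create 640 root adm
    sharedscripts
    postrotate
        # Ask nginx to reopen its logs so the old file's
        # descriptor is closed and the space can be freed
        [ ! -f /var/run/nginx.pid ] || kill -USR1 `cat /var/run/nginx.pid`
    endscript
}
```

Without a working postrotate signal, nginx keeps writing to the old (now renamed or deleted) inode indefinitely.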
asked 08 November 2011 at 15:11
I know this is an old thread, but my guess for the "suddenly the available disk space shrinks" part is an open handle to the deleted file: if you delete a file while some process still holds an open handle to it, the disk space is not released until that handle is closed, which is exactly the behavior you describe. The nginx restart is what freed the handle. It sounds like your logrotate configuration isn't causing nginx to release the handle on rotation.
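The deleted-but-open behavior is easy to reproduce on Linux. A minimal sketch (the file path is arbitrary):

```shell
#!/bin/sh
# Create a file, hold it open, then delete it: the blocks stay
# allocated until the last open descriptor is closed.
f=/tmp/demo_deleted_open.log
dd if=/dev/zero of="$f" bs=1M count=5 2>/dev/null

exec 3<"$f"   # hold an open descriptor, as nginx does with its log
rm "$f"       # the directory entry is gone, the inode is not

# lsof +L1 (or /proc on Linux) still shows the file, marked deleted:
readlink "/proc/$$/fd/3"   # on Linux: /tmp/demo_deleted_open.log (deleted)

exec 3<&-     # only now is the disk space actually released
```

This matches what you saw: touching tmp/restart.txt restarts the application, but the process holding the log descriptor kept it open, so only restarting nginx reclaimed the space.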