Memory leak in the new VM?

I have a program that runs against a MB server. With the old VM it always completed successfully, taking around 2-3 days of non-stop processing. With the new VM it fails within 15 minutes to 3 hours when given 2 GB of memory, or within 12-24 hours with 4 GB.

The program makes consecutive requests like ‘/artist/mbid’, ‘/release/mbid’, etc., and each request is issued only after the previous one has completed successfully. These are simple, “standard” requests. Tens of thousands of such requests succeed, and then the VM fails with an “out of memory” error. This is the latest:

```
Out of memory: Kill process 2524 (java) score 162 or sacrifice child
Killed process 2524 (java) total-vm:3675056kB, anon-rss:655756kB, file-rss:0kB
Out of memory: Kill process 3165 (redis-server) score 161 or sacrifice child
Killed process 3165 (redis-server) total-vm:684388kB, anon-rss:653556kB, file-rss:0kB
```
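In case it helps narrow things down, something like the following can log the resident set size of the java and redis-server processes over time, so you can see which one is actually growing before the OOM killer fires. This is only a sketch: it assumes a Linux VM with `/proc`, and the PIDs, interval, and sample count are placeholders.

```python
import os
import re
import time

def rss_kb(pid: int) -> int:
    """Return VmRSS in kB for a process, parsed from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            m = re.match(r"VmRSS:\s+(\d+)\s+kB", line)
            if m:
                return int(m.group(1))
    return 0  # kernel threads have no VmRSS line

def monitor(pids, interval=60.0, samples=5):
    """Print one RSS sample per process per interval (bounded for the sketch)."""
    for _ in range(samples):
        stamp = int(time.time())
        for pid in pids:
            try:
                print(stamp, pid, rss_kb(pid))
            except FileNotFoundError:
                print(stamp, pid, "gone")  # process exited (e.g. OOM-killed)
        time.sleep(interval)

if __name__ == "__main__":
    # In practice: monitor([2524, 3165], interval=60, samples=1440),
    # using the PIDs from the OOM log above. Quick self-demo here:
    monitor([os.getpid()], interval=0.5, samples=2)
```

The process whose column keeps climbing between samples is the one leaking.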

I should also note that I have another program that takes a few days to complete. It only makes calls against the /ws/2/ service, and it does not fail. The program that fails mixes calls to “/” with calls to “/ws/2”, so I suspect that some resources related to the “/” calls are not released/GCed.
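For what it’s worth, whether or not the leak is server-side, one thing worth ruling out on the client is response objects that are never fully read and closed, which can leave sockets lingering on both ends across tens of thousands of calls. A minimal sketch of the sequential loop with explicit closing, using only the Python standard library (the base URL, paths, and MBIDs are placeholders; the real client may be written quite differently):

```python
import urllib.request

def fetch(url: str) -> bytes:
    """Issue one request and fully read + close the response.

    The `with` block guarantees the response (and its socket) is closed
    even on errors, so sequential calls do not accumulate half-open
    connections.
    """
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read()

def run(base: str, mbids):
    # Strictly sequential: each request starts only after the previous
    # one has completed, matching the access pattern described above.
    for mbid in mbids:
        artist = fetch(f"{base}/ws/2/artist/{mbid}?inc=aliases")
        release = fetch(f"{base}/ws/2/release/{mbid}")
```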


One more observation: a particular request seems especially slow, and it is specific to the program that fails:


I don’t know whether a similar request can be made against the ‘/ws/2’ service; I would very much like that feature.