We are going to build a feature in my system that will involve a very large volume of HTTP POSTs and responses, probably millions of request/response pairs of just a few bytes each (sent and received via AJAX as JSON).
It seems silly to worry about saving a few bytes, but I want to be prepared for the worst.
Any directions? Just post what you think; let's share our thoughts.
asked Mar 10 '12 at 02:03
In Java, you can use the java.util.zip package, which provides compress/decompress methods.
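As a sketch, a compress/decompress round trip with GZIPOutputStream and GZIPInputStream from java.util.zip might look like this (the class name and JSON payload are just illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {

    // Compress a byte array with GZIP.
    static byte[] compress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
            gzip.write(data);
        }
        return bos.toByteArray();
    }

    // Decompress a GZIP byte array back to the original bytes.
    static byte[] decompress(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPInputStream gzip =
                 new GZIPInputStream(new ByteArrayInputStream(data))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = gzip.read(buf)) > 0) {
                bos.write(buf, 0, n);
            }
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        String json = "{\"id\":1,\"name\":\"example\"}";
        byte[] compressed = compress(json.getBytes(StandardCharsets.UTF_8));
        String restored =
            new String(decompress(compressed), StandardCharsets.UTF_8);
        System.out.println(json.equals(restored)); // round trip is lossless
    }
}
```

Note that for a payload this small, the GZIP header and trailer overhead can make the compressed form *larger* than the original, which is exactly the concern with many tiny requests.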
For example, take a payload whose bytes look like this:
0x41 0x00 0x41 0x00 0x42 0x00 0x43 0x00 0x43 0x00 0x43 0x00
It could be compressed by leaving out the 0x00 bytes:
0x41 0x41 0x42 0x43 0x43 0x43
and shrunk even further by run-length encoding the repeats (count followed by value):
2 0x41 1 0x42 3 0x43
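As a toy illustration only (for real traffic you would use zlib/gzip rather than roll your own), the run-length step above could be sketched like this; all names are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class RunLengthDemo {

    // Encode a byte sequence as (count, value) pairs,
    // as in the 2 0x41 / 1 0x42 / 3 0x43 example above.
    static List<int[]> runLengthEncode(byte[] data) {
        List<int[]> pairs = new ArrayList<>();
        int i = 0;
        while (i < data.length) {
            int j = i;
            while (j < data.length && data[j] == data[i]) {
                j++; // extend the current run of equal bytes
            }
            pairs.add(new int[]{j - i, data[i] & 0xFF});
            i = j;
        }
        return pairs;
    }

    public static void main(String[] args) {
        byte[] input = {0x41, 0x41, 0x42, 0x43, 0x43, 0x43};
        for (int[] p : runLengthEncode(input)) {
            System.out.printf("%d 0x%02X%n", p[0], p[1]);
        }
        // prints:
        // 2 0x41
        // 1 0x42
        // 3 0x43
    }
}
```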
Even though this doesn't look very impressive right now, one or two kinds of request might profit from such compression. It is important, though, that the algorithm be efficient, since you are talking about 'millions' of requests: one big request can profit from compression, but for many small requests the per-request overhead can make it very inefficient.
Sorry I can't provide you with a complete solution, but maybe this brings you a little closer.
Is a server-level implementation of zlib compression not good enough for your use case? That would be the simplest and most reliable way to get compression working. All the most common web servers and browsers support zlib compression out of the box.
Performance questions require you to try and measure to get any reasonable answer.
In your case I would look carefully at a raw message and see where the bytes are - I bet that for tiny data packets most of the bytes will be in the headers, so compressing the content will give you little benefit. It is your system - look at your requests and see where you can decrease the size of the packets.
Note that you often need to send the user's authentication with the request - as a result your request will carry a fixed-size, usually non-compressible, chunk of data.
You could install a compression filter and have that handle it for you:
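The original code is missing here; in a Servlet container, the usual way is to register a Filter in web.xml so the container compresses responses transparently. A sketch, where com.example.GzipFilter is a placeholder class name (ready-made implementations exist, e.g. in servlet compression-filter libraries or built into the container itself):

```xml
<!-- web.xml: apply a response-compression filter to all URLs.
     GzipFilter is a hypothetical class; substitute a real implementation. -->
<filter>
    <filter-name>gzipFilter</filter-name>
    <filter-class>com.example.GzipFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>gzipFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```

A well-behaved filter of this kind checks the request's Accept-Encoding header and only gzips the response when the client advertises gzip support.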