I have observed that reading from an Azure Append Blob becomes very slow once it has been appended to thousands of times. Write/append speed is very fast, but reading a typical log blob with thousands of appended blocks, each a few KB in size and a few MB in total, takes more than a minute! Reading a regular block or page blob of similar size takes only milliseconds. Is there a way to speed up reading append blobs, e.g. by flattening their internal structure?
So far it seems best to periodically "archive"/convert append blobs to block blobs and then process those. Any suggestions?
If not, what alternative log storage would you suggest?
Azure Tables could be used, but even with batch operations they require more read requests.
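The periodic "archive" step mentioned above can be sketched as follows. `BlobStore` here is a hypothetical in-memory stand-in for a container client, so the sketch is runnable; with the real `azure-storage-blob` SDK the same flow would use `download_blob().readall()` and `upload_blob()` (which creates a block blob by default).

```python
class BlobStore:
    """Hypothetical in-memory stand-in for a blob container client."""
    def __init__(self):
        self.blobs = {}  # name -> (blob_type, bytes)

    def append(self, name, chunk):
        # Append blobs accumulate many small blocks; reads must walk them all.
        kind, data = self.blobs.get(name, ("AppendBlob", b""))
        self.blobs[name] = ("AppendBlob", data + chunk)

    def upload_block_blob(self, name, data):
        # A block blob holds the same bytes in a flat, fast-to-read layout.
        self.blobs[name] = ("BlockBlob", data)

    def read(self, name):
        return self.blobs[name][1]

    def delete(self, name):
        del self.blobs[name]


def archive_append_blob(store, name, archive_prefix="archive/"):
    """Convert an append blob into a block blob under archive_prefix."""
    data = store.read(name)              # one sequential read of the log
    dest = archive_prefix + name
    store.upload_block_blob(dest, data)  # flat copy, fast to re-read
    store.delete(name)                   # drop the slow original
    return dest
```

Run periodically (say, once the blob reaches a block-count threshold), readers then hit only the fast block-blob copies:

```python
store = BlobStore()
for i in range(3):
    store.append("logs/app.log", b"line %d\n" % i)
dest = archive_append_blob(store, "logs/app.log")
```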
I did switch to Azure Tables, and the read performance is reasonable: about 1.5K items in roughly one second when read in batches. Still, a block or page blob with the same content can be read in milliseconds. If there were a way to convert an append blob into a block or page blob, that would be very effective; it can be done manually, so perhaps a future version could do it automatically for append (or other) blob types. Append blobs are quite complicated internally (see "Deep dive in Append Blob"), which may be the reason for the slow reads.
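The batched Table read described above amounts to querying whole partitions (e.g. one partition per log hour). A small sketch of building the OData filter for such a query; the helper name and partitioning scheme are illustrative, but the filter syntax and quote-escaping rule are those used by `query_entities(query_filter=...)` in `azure-data-tables`.

```python
def partition_filter(partition_key):
    """Build an OData filter selecting every entity in one partition.

    Single quotes inside the key must be doubled, per OData string
    escaping; everything else passes through unchanged.
    """
    escaped = partition_key.replace("'", "''")
    return "PartitionKey eq '%s'" % escaped
```

With the real SDK this would be used roughly as `table_client.query_entities(query_filter=partition_filter("2024-01-01T10"))`, iterating the results page by page.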