NoSQL – Should I use Redis to store a lot of binary files?

I need to store a large number of binary files (10–20 TB in total, each file ranging from 512 KB to 100 MB).

I need to know whether Redis will work on my system.
I need the following attributes in my system:

- High availability
- Failover
- Sharding

I plan to use a lot of commodity hardware to keep costs down. Please suggest the pros and cons of using Redis to build such a system. I am also concerned about Redis's hardware (particularly memory) requirements.

I would not use Redis for a task like this. Other products are a better fit, IMO.

Redis is an in-memory data store. If you want to store 10–20 TB of data, you need 10–20 TB of RAM, which is expensive. Moreover, its memory allocator is optimized for small objects, not large ones. You would probably have to split your files into small chunks, which is not very convenient.
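To illustrate the chunking inconvenience, here is a minimal sketch of splitting a file into fixed-size pieces, one per Redis key. The key naming scheme (`file:<name>:chunk:<i>`) and the 1 MB chunk size are illustrative assumptions, not a standard convention:

```python
# Split a large binary blob into fixed-size chunks so that each Redis
# value stays small. With a real client you would pipeline one SET per
# (key, chunk) pair and GET them back in order to reassemble the file.

CHUNK_SIZE = 1024 * 1024  # 1 MB per chunk (arbitrary choice)

def chunk_keys(name: str, data: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (key, chunk) pairs suitable for SET commands."""
    for i in range(0, len(data), chunk_size):
        yield f"file:{name}:chunk:{i // chunk_size}", data[i:i + chunk_size]

def reassemble(chunks):
    """Concatenate chunks fetched back in key order."""
    return b"".join(chunk for _, chunk in chunks)
```

Note that every read and write of a single file now fans out into many round trips, and partial failures leave you with half-written files unless you add your own bookkeeping on top.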

Redis does not provide an out-of-the-box solution for HA and failover. It does offer master/slave replication (which works very well), but no automation of the failover itself. Clients must be smart enough to switch to the correct server, and something on the server side (which is left unspecified) must switch the roles between master and slave nodes in a reliable way. In other words, Redis only offers a do-it-yourself HA/failover solution.

Sharding has to be implemented on the client side (as with memcached). Some client libraries support it, but not all of them; the fastest client (hiredis) does not. In any case, rebalancing has to be implemented on top of Redis. Redis Cluster, which is supposed to provide this sharding capability, is not ready yet.

I would suggest looking at some other solutions. MongoDB with GridFS is one possibility; Hadoop with HDFS is another. If you like cutting-edge projects, you might want to try the Elliptics Network.
