I am trying to export data from Cassandra using the CQL client. The column family contains about 100,000 rows. When I use the COPY TO command to copy the data to a CSV file, I get the following rpc_timeout error:
copy mycolfamily to '/root/mycolfamily.csv'
Request did not complete within rpc_timeout.
I am running:
[cqlsh 3.1.6 | Cassandra 1.2.8 | CQL Specification 3.0.0 | Thrift Protocol 19.36.0]
How can I increase the RPC timeout limit?
I tried adding rpc_timeout_in_ms: 20000 (the default is 10000) to my conf/cassandra.yaml file, but when restarting Cassandra I get:
[root@user ~]# null; Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=Cannot create property=rpc_timeout_in_ms for JavaBean=org.apache.cassandra.config.Config@71bfc4fc; Unable to find property 'rpc_timeout_in_ms' on class: org.apache.cassandra.config.Config
Invalid yaml; unable to start server. See log for stacktrace.
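For context: Cassandra 1.2 no longer has a single rpc_timeout_in_ms setting; it was split into per-operation timeouts, which is presumably why the property is not found on the Config class. A minimal sketch of the corresponding cassandra.yaml entries (property names as in 1.2, the 20000 values are only illustrative):

# Per-operation timeouts that replaced rpc_timeout_in_ms in Cassandra 1.2
read_request_timeout_in_ms: 20000      # single-partition reads
range_request_timeout_in_ms: 20000     # range scans, which COPY TO / SELECT * rely on
write_request_timeout_in_ms: 20000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 20000           # catch-all for other operations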
The COPY command currently uses SELECT with LIMIT 99999999 to perform the export, so as the data grows it will eventually hit the timeout. This is the export function:
https://github.com/apache/cassandra/blob/trunk/bin/cqlsh#L1524
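In other words, the COPY TO above effectively issues one unpaged query along these lines (table name taken from the question), which is why it trips rpc_timeout once the column family is large:

SELECT * FROM mycolfamily LIMIT 99999999;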
I am doing the same export in production. What I do is the following (a rough script is sketched after the list):
>Select * from the table where timeuuid > someTimeuuid limit 10000
>Write the result set to a csv file with >> (append) mode
>Make the next selection based on the last timeuuid
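A rough shell sketch of that loop, reusing the cqlsh connection details from the question. The keyspace mykeyspace, the columns pkey (partition key), id (timeuuid clustering column) and value, and the parsing of cqlsh's table output are all assumptions; adapt them to your schema:

#!/bin/sh
# Page through mykeyspace.mycolfamily 10000 rows at a time and append to a CSV.
# Schema (pkey, id, value) and the output parsing below are assumptions.
CQLSH="/usr/bin/cqlsh -u user -p password localhost 9160"
OUT=/root/mycolfamily.csv
PAGE=10000
last=""

while true; do
    if [ -z "$last" ]; then
        cql="SELECT id, value FROM mykeyspace.mycolfamily WHERE pkey = 'x' LIMIT $PAGE;"
    else
        cql="SELECT id, value FROM mykeyspace.mycolfamily WHERE pkey = 'x' AND id > $last LIMIT $PAGE;"
    fi

    # Data rows in cqlsh output are the lines containing " | "; the first such
    # line is the column header, so drop it.
    rows=$(echo "$cql" | $CQLSH | grep ' | ' | sed '1d')
    [ -z "$rows" ] && break

    # Turn cqlsh's "col | col" layout into comma-separated values and append.
    echo "$rows" | sed 's/ *| */,/g; s/^ *//; s/ *$//' >> "$OUT"

    # The last timeuuid of this page becomes the lower bound of the next page.
    last=$(echo "$rows" | tail -n 1 | awk '{print $1}')
done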
You can pipe CQL commands into cqlsh like this:
echo "${cql}" | /usr/bin/cqlsh -u user -p password localhost 9160 > file.csv
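For example, with append mode (>>) as in the steps above (the file name and query are placeholders):

cql="SELECT * FROM mykeyspace.mycolfamily LIMIT 10000;"
echo "${cql}" | /usr/bin/cqlsh -u user -p password localhost 9160 >> /root/mycolfamily.csv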