Troubleshooting an ES cluster that goes red: restarts, instability, and downtime

Cluster settings are either persistent (they survive a full cluster restart) or transient (they are lost after the entire cluster restarts).
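
For example, a minimal sketch showing both flavors in one request (the values here are only illustrations, not recommendations):

PUT /_cluster/settings
{
  "persistent": {
    "indices.recovery.max_bytes_per_sec": "40mb"
  },
  "transient": {
    "cluster.routing.rebalance.enable": "none"
  }
}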


View the cluster status and the status of each index. If you find a red index that you don't use, delete it:

GET /_cluster/health?level=indices

DELETE /.monitoring-kibana-6-2019.07.11/


View all unassigned shards. Shards should be spread evenly across the nodes:

curl -s 'localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED

Check the reason a shard failed to allocate:

GET /_cluster/allocation/explain?pretty
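
The explain API can also be pointed at one specific shard by passing a body (a sketch; the index name is illustrative):

GET /_cluster/allocation/explain
{
  "index": "log4j-emobilelog",
  "shard": 0,
  "primary": true
}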

Delay shard reallocation to reduce the pressure of an immediate rebalance when a single node restarts. In general, disable allocation and rebalancing before restarting:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries",
    "cluster.routing.rebalance.enable": "none"
  }
}

PUT /_all/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "15m"
  }
}
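
Once the node has rejoined, remember to restore the defaults. Setting a value to null resets it; a minimal sketch:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null,
    "cluster.routing.rebalance.enable": null
  }
}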

# Dynamically set the number of replicas for an ES index
curl -XPUT 'http://168.7.1.67:9200/log4j-emobilelog/_settings' -H 'Content-Type: application/json' -d '{
  "number_of_replicas": 2
}'
  
# Prevent ES from automatically allocating shards
curl -XPUT 'http://168.7.1.67:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}'

# Manually move a shard
curl -XPOST 'http://168.7.1.67:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '{
  "commands": [{
    "move": {
      "index": "log4j-emobilelog",
      "shard": 0,
      "from_node": "es-0",
      "to_node": "es-3"
    }
  }]
}'
  
# Manually allocate a shard
curl -XPOST 'http://168.7.1.67:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '{
  "commands": [{
    "allocate": {
      "index": ".kibana",
      "shard": 0,
      "node": "es-2"
    }
  }]
}'

Set the recovery concurrency and the recovery bandwidth per second:

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.node_concurrent_recoveries": 100,
    "indices.recovery.max_bytes_per_sec": "40mb"
  }
}

For heavy bulk writes, first disable refresh:
curl -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d '{"index": {"refresh_interval": "-1"}}'
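
When the bulk load finishes, restore refresh. A minimal sketch; setting the interval to null falls back to the default (1s):

curl -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d '{"index": {"refresh_interval": null}}'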

Temporarily reduce the number of replicas:

curl -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d '{
  "index": {
    "number_of_replicas": 1
  }
}'
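
When recovery pressure has passed, raise the replica count back up (the value 2 here is just an example):

curl -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d '{
  "index": {
    "number_of_replicas": 2
  }
}'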


View current node information, including thread pools and memory usage:
curl -XGET 'http://localhost:9200/_nodes/stats?pretty'

curl -XGET 'localhost:9200/_cat/nodes?h=name,ram.current,ram.percent,ram.max,fielddata.memory_size,query_cache.memory_size,request_cache.memory_size,percolate.memory_size,segments.memory,segments.index_writer_memory,segments.index_writer_max_memory,segments.version_map_memory,segments.fixed_bitset_memory,heap.current,heap.percent,heap.max&v'

Clear caches:
curl -XPOST 'localhost:9200/_cache/clear'


Notes when restarting an ES node (see the command sketch after this list):
1. Pause the data-writing program (if conditions permit; in production this is usually not allowed. In our case, the writer re-writes data to ES when a write fails, so writing can keep running and the entire ES cluster never needs a full stop).
2. Disable cluster shard allocation.
3. Manually execute POST /_flush/synced.
4. Restart the node.
5. Re-enable cluster shard allocation.
6. Wait for recovery to complete and the cluster health to turn green.
7. Resume the data-writing program.
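
A sketch of steps 2-6 as curl commands (the host and Content-Type header are assumptions; adapt to your cluster):

# 2. Disable shard allocation (primaries only, as set earlier in this post)
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "persistent": {"cluster.routing.allocation.enable": "primaries"}
}'
# 3. Synced flush; retry until all shards report success
curl -XPOST 'localhost:9200/_flush/synced'
# 4. Restart the node, then...
# 5. Re-enable shard allocation (null resets to the default "all")
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "persistent": {"cluster.routing.allocation.enable": null}
}'
# 6. Watch cluster health until it turns green
curl -XGET 'localhost:9200/_cluster/health?pretty'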

!!! Without an index template, dynamically mapped field types can drift between indices, which is likely to drag ES down.
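
A minimal sketch of pinning field types with an index template (ES 6.x syntax; the template name, pattern, and fields are hypothetical):

PUT _template/log_template
{
  "index_patterns": ["log4j-*"],
  "mappings": {
    "_doc": {
      "properties": {
        "message": { "type": "text" },
        "@timestamp": { "type": "date" }
      }
    }
  }
}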

https://www.elastic.co/guide/en/elasticsearch/guide/current/indexing-performance.html#_using_and_sizing_bulk_requests
Segment merging slows down indexing; when throttling kicks in you will see log lines such as:
now throttling indexing
The default merge throttle is 20 MB/s; on SSDs, 100-200 MB/s is recommended:
PUT /_cluster/settings
{
  "persistent": {
    "indices.store.throttle.max_bytes_per_sec": "100mb"
  }
}
If you only ingest data and never query the index, you can even turn throttling off entirely (set the type back to merge to re-enable it):

PUT /_cluster/settings
{
  "transient": {
    "indices.store.throttle.type": "none"
  }
}

To reduce disk I/O pressure on mechanical hard disks, limit merging to a single thread
(this setting allows max_thread_count + 2 threads to operate on the disk at once, so a value of 1 allows three threads).
For SSDs you can ignore this setting; the default is Math.min(3, Runtime.getRuntime().availableProcessors() / 2), which works well for SSDs.

Set this in the elasticsearch.yml configuration file:
index.merge.scheduler.max_thread_count: 1
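
On versions where this index setting is dynamic, it can also be changed per index without a restart (an assumption to verify against your version; the index name is illustrative):

curl -XPUT 'localhost:9200/log4j-emobilelog/_settings' -H 'Content-Type: application/json' -d '{
  "index.merge.scheduler.max_thread_count": 1
}'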

Finally, you can increase index.translog.flush_threshold_size from the default 512 MB to something larger, such as 1 GB.
!!! This can reduce disk pressure, but it will increase memory pressure.
This allows larger segments to accumulate in the translog before a flush occurs.
By letting larger segments build, you flush less often, and the larger segments merge less often.
All of this adds up to less disk I/O overhead and better indexing rates
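
A sketch of applying the 1 GB threshold per index dynamically:

curl -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d '{
  "index.translog.flush_threshold_size": "1gb"
}'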


Once you know which shard of which index needs repair, you can fix it manually through the reroute API's allocate command:

curl -XPOST '{ESIP}:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '{
  "commands": [{
    "allocate": {
      "index": "eslog1",
      "shard": 4,
      "node": "es1",
      "allow_primary": true
    }
  }]
}'

Reference: https://www.cnblogs.com/seaspring/p/9322582.html
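
Note that on ES 5+ the plain allocate command was split into allocate_replica, allocate_stale_primary, and allocate_empty_primary. A sketch of forcing an empty primary (this accepts data loss for that shard):

curl -XPOST '{ESIP}:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '{
  "commands": [{
    "allocate_empty_primary": {
      "index": "eslog1",
      "shard": 4,
      "node": "es1",
      "accept_data_loss": true
    }
  }]
}'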
