Plugin Management
Elasticsearch-Head
Elasticsearch 5.0 and later no longer supports installing the head plugin through the built-in elasticsearch-plugin command. The head plugin is essentially a graphical management tool that works by calling the Elasticsearch REST API, so the network where the head management platform runs must be able to reach the Elasticsearch connection address you enter.
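Since head calls the REST API directly from the browser, the cluster typically also needs CORS enabled. A minimal sketch of the relevant settings in elasticsearch.yml (tighten allow-origin beyond "*" in production):
http.cors.enabled: true
http.cors.allow-origin: "*"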
Hadoop HDFS Repository Plugin
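On a self-managed cluster, the repository-hdfs plugin must first be installed on every node (followed by a node restart), typically with:
sudo bin/elasticsearch-plugin install repository-hdfs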
* Register the HDFS repository configuration via the REST API:
PUT /_snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/repositories/my_hdfs_repository"
  }
}
# uri: the HDFS address, for example "hdfs://<host>:<port>/"
# path: the path in the file system where data is stored/loaded, for example "path/to/file"
After creation, you can retrieve the HDFS repository information:
GET /_snapshot/my_hdfs_repository
Example
{
  "my_hdfs_repository": {
    "type": "hdfs",
    "settings": {
      "path": "elasticsearch/repositories/my_hdfs_repository",
      "uri": "hdfs://namenode:8020/"
    }
  }
}
* Create a snapshot:
PUT /_snapshot/my_hdfs_repository/snapshot_1
{
  "indices": "index_1,index_2",
  "ignore_unavailable": true,
  "include_global_state": false
}
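# indices: comma-separated list of indices to include in the snapshot
# ignore_unavailable: if true, missing indices are skipped instead of failing the request
# include_global_state: if false, the cluster-wide state is not stored in the snapshot
The request returns as soon as the snapshot is initialized; append ?wait_for_completion=true to block until it finishes.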
After creation, you can get snapshot information:
GET /_snapshot/my_hdfs_repository/snapshot_1
Example
{
  "snapshots": [
    {
      "snapshot": "snapshot_1",
      "uuid": "yr9T6jtLTCeVFRoNGN-9Lw",
      "version_id": 5050199,
      "version": "5.5.1",
      "indices": [
        ".kibana",
        "test_index"
      ],
      "state": "SUCCESS",
      "start_time": "2018-02-01T08:13:26.128Z",
      "start_time_in_millis": 1517472806128,
      "end_time": "2018-02-01T08:13:28.870Z",
      "end_time_in_millis": 1517472808870,
      "duration_in_millis": 2742,
      "failures": [],
      "shards": {
        "total": 6,
        "failed": 0,
        "successful": 6
      }
    }
  ]
}
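You can also list every snapshot in the repository at once:
GET /_snapshot/my_hdfs_repository/_all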
* Delete the snapshot:
DELETE /_snapshot/my_hdfs_repository/snapshot_1
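To unregister the repository itself (the snapshot files already written to HDFS are not deleted), remove the repository reference:
DELETE /_snapshot/my_hdfs_repository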
* Restore the snapshot:
POST /_snapshot/my_hdfs_repository/snapshot_1/_restore
Note: If an index to be restored from the snapshot already exists in the cluster and is open, you must first close it with the `_close` API, for example:
POST /.kibana/_close
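Alternatively, instead of closing the existing index, the restore API can rename indices as they are restored. A minimal sketch (the restored_index_ prefix is an arbitrary choice):
POST /_snapshot/my_hdfs_repository/snapshot_1/_restore
{
  "indices": "index_1",
  "rename_pattern": "index_(.+)",
  "rename_replacement": "restored_index_$1"
}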
For more detailed plugin usage, please refer to the Hadoop HDFS Repository Plugin documentation.
IK Analysis Plugin
Custom Word Segmentation Dictionary Operation
Custom dictionaries can be configured in the IK configuration file (IKAnalyzer.cfg.xml) as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
  <comment>IK Analyzer Extended Configuration</comment>
  <!-- Users can configure their own extended dictionary here -->
  <entry key="ext_dict">custom/mydict.dic;custom/single_word_low_freq.dic</entry>
  <!-- Users can configure their own extended stopword dictionary here -->
  <entry key="ext_stopwords">custom/ext_stopword.dic</entry>
  <!-- Users can configure a remote extended dictionary here -->
  <entry key="remote_ext_dict">location</entry>
  <!-- Users can configure a remote extended stopword dictionary here -->
  <entry key="remote_ext_stopwords">http://xxx.com/xxx.dic</entry>
</properties>
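Each local .dic file referenced above (for example custom/mydict.dic) is assumed to be a plain UTF-8 text file with one term per line:
People's Republic of China
United States of America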
The IK analyzer supports both local custom dictionaries and remote hot-update dictionaries (for remote dictionaries, the plugin periodically polls the URL and uses the Last-Modified and ETag response headers to detect changes). UES applies dictionary updates by storing the dictionary entries in a dedicated index. The operation procedure in Kibana is given below; the index name, type, and document IDs in the API calls must be used exactly as shown.
* Local Extended Dictionary
PUT /custom_ik/analyzer/1
{
  "ext_dict": [
  ]
}
Example:
PUT /custom_ik/analyzer/1
{
  "ext_dict": [
    "People's Republic of China",
    "United States of America",
    "The United Kingdom of Great Britain and Northern Ireland"
  ]
}
* Local Extended Stop Word Dictionary
PUT /custom_ik/analyzer/2
{
  "ext_stopwords": [
  ]
}
Example:
PUT /custom_ik/analyzer/2
{
  "ext_stopwords": [
    "People's Republic of China",
    "United States of America",
    "The United Kingdom of Great Britain and Northern Ireland"
  ]
}
* Remote Extended Dictionary
PUT /custom_ik/analyzer/3
{
  "remote_ext_dict": ""
}
Example:
PUT /custom_ik/analyzer/3
{
  "remote_ext_dict": "http://localhost:8080/my_dict.dic"
}
* Remote Extended Stop Word Dictionary
PUT /custom_ik/analyzer/4
{
  "remote_ext_stopwords": ""
}
Example:
PUT /custom_ik/analyzer/4
{
  "remote_ext_stopwords": "http://localhost:8080/my_stopwords.dic"
}
Check that the dictionary data was indexed successfully:
GET /custom_ik/analyzer/1
GET /custom_ik/analyzer/2
GET /custom_ik/analyzer/3
GET /custom_ik/analyzer/4
* Finally, go to the console and click the word segmentation dictionary item that needs to be updated to regenerate the configuration file, then restart the cluster nodes for the change to take effect.
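After the restart, you can verify that the new dictionary has taken effect by testing tokenization with the _analyze API (ik_max_word is one of the analyzers provided by the IK plugin; the sample text is arbitrary):
GET /_analyze
{
  "analyzer": "ik_max_word",
  "text": "People's Republic of China"
}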