
I used Elasticsearch 2.x with startup flags to configure the publish_address. I need the publish address to be configurable because I start Elasticsearch in a Docker container and want to access it from outside. The publish address therefore has to be the IP of the Docker host, which in my case is 192.168.99.100, and I want to reach Elasticsearch on port 9201. How do I configure the publish address of Elasticsearch 5.0 with CLI flags? This is what I run:

docker run -d -p 9201:9201 --name elasticsearch_test elasticsearch:5.2-alpine elasticsearch -Enetwork.publish_host="192.168.99.100" -Ehttp.port="9201" 

which is the equivalent of the old command:

docker run -d -p 9201:9201 --name elasticsearch_test elasticsearch:2.4.1 elasticsearch -Des.network.publish_host="192.168.99.100" -Des.http.port="9201" 

But when I start the container and look at the logs, I don't get the publish address 192.168.99.100:9201; instead I get 192.168.99.100:9300 for the transport layer and 172.17.0.2:9201 for HTTP. How can I make Elasticsearch use my combination of address and port?
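For reference, one way to check which address a node actually publishes, assuming the mapped HTTP port is reachable from the Docker host (the _nodes/http API is part of Elasticsearch 5.x):

curl "http://192.168.99.100:9201/_nodes/http?pretty"

The publish_address field in the response should match what the startup logs report.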

Thanks in advance.

Output of docker logs elasticsearch_test:

[2017-02-13T09:17:03,095][INFO ][o.e.n.Node    ] [] initializing ... 
[2017-02-13T09:17:03,252][INFO ][o.e.e.NodeEnvironment ] [ntIFoHQ] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [1gb], net total_space [17.8gb], spins? [possibly], types [ext4] 
[2017-02-13T09:17:03,252][INFO ][o.e.e.NodeEnvironment ] [ntIFoHQ] heap size [1.9gb], compressed ordinary object pointers [true] 
[2017-02-13T09:17:03,253][INFO ][o.e.n.Node    ] node name [ntIFoHQ] derived from node ID [ntIFoHQnTAahC7_0cEt32Q]; set [node.name] to override 
[2017-02-13T09:17:03,257][INFO ][o.e.n.Node    ] version[5.2.0], pid[1], build[24e05b9/2017-01-24T19:52:35.800Z], OS[Linux/4.4.43-boot2docker/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_111-internal/25.111-b14] 
[2017-02-13T09:17:05,249][INFO ][o.e.p.PluginsService  ] [ntIFoHQ] loaded module [aggs-matrix-stats] 
[2017-02-13T09:17:05,250][INFO ][o.e.p.PluginsService  ] [ntIFoHQ] loaded module [ingest-common] 
[2017-02-13T09:17:05,251][INFO ][o.e.p.PluginsService  ] [ntIFoHQ] loaded module [lang-expression] 
[2017-02-13T09:17:05,251][INFO ][o.e.p.PluginsService  ] [ntIFoHQ] loaded module [lang-groovy] 
[2017-02-13T09:17:05,251][INFO ][o.e.p.PluginsService  ] [ntIFoHQ] loaded module [lang-mustache] 
[2017-02-13T09:17:05,251][INFO ][o.e.p.PluginsService  ] [ntIFoHQ] loaded module [lang-painless] 
[2017-02-13T09:17:05,251][INFO ][o.e.p.PluginsService  ] [ntIFoHQ] loaded module [percolator] 
[2017-02-13T09:17:05,251][INFO ][o.e.p.PluginsService  ] [ntIFoHQ] loaded module [reindex] 
[2017-02-13T09:17:05,254][INFO ][o.e.p.PluginsService  ] [ntIFoHQ] loaded module [transport-netty3] 
[2017-02-13T09:17:05,254][INFO ][o.e.p.PluginsService  ] [ntIFoHQ] loaded module [transport-netty4] 
[2017-02-13T09:17:05,254][INFO ][o.e.p.PluginsService  ] [ntIFoHQ] no plugins loaded 
[2017-02-13T09:17:05,677][WARN ][o.e.d.s.g.GroovyScriptEngineService] [groovy] scripts are deprecated, use [painless] scripts instead 
[2017-02-13T09:17:10,757][INFO ][o.e.n.Node    ] initialized 
[2017-02-13T09:17:10,757][INFO ][o.e.n.Node    ] [ntIFoHQ] starting ... 
[2017-02-13T09:17:11,015][WARN ][i.n.u.i.MacAddressUtil ] Failed to find a usable hardware address from the network interfaces; using random bytes: 07:0a:ef:37:62:95:b2:77 
[2017-02-13T09:17:11,198][INFO ][o.e.t.TransportService ] [ntIFoHQ] publish_address {192.168.99.100:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300} 
[2017-02-13T09:17:11,203][INFO ][o.e.b.BootstrapChecks ] [ntIFoHQ] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks 
[2017-02-13T09:17:14,351][INFO ][o.e.c.s.ClusterService ] [ntIFoHQ] new_master {ntIFoHQ}{ntIFoHQnTAahC7_0cEt32Q}{cW1MZt0-RmutLXz_Tkm8mw}{192.168.99.100}{192.168.99.100:9300}, reason: zen-disco-elected-as-master ([0] nodes joined) 
[2017-02-13T09:17:14,395][INFO ][o.e.h.HttpServer   ] [ntIFoHQ] publish_address {172.17.0.2:9201}, bound_addresses {[::]:9201} 
[2017-02-13T09:17:14,396][INFO ][o.e.n.Node    ] [ntIFoHQ] started 
[2017-02-13T09:17:14,423][INFO ][o.e.g.GatewayService  ] [ntIFoHQ] recovered [0] indices into cluster_state 
[2017-02-13T09:17:44,398][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] high disk watermark [90%] exceeded on [ntIFoHQnTAahC7_0cEt32Q][ntIFoHQ][/usr/share/elasticsearch/data/nodes/0] free: 1gb[5.7%], shards will be relocated away from this node 
[2017-02-13T09:17:44,398][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] rerouting shards: [high disk watermark exceeded on one or more nodes] 
[2017-02-13T09:18:14,434][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] high disk watermark [90%] exceeded on [ntIFoHQnTAahC7_0cEt32Q][ntIFoHQ][/usr/share/elasticsearch/data/nodes/0] free: 1gb[5.7%], shards will be relocated away from this node 
[2017-02-13T09:18:44,438][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] high disk watermark [90%] exceeded on [ntIFoHQnTAahC7_0cEt32Q][ntIFoHQ][/usr/share/elasticsearch/data/nodes/0] free: 1gb[5.7%], shards will be relocated away from this node 
[2017-02-13T09:18:44,438][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] rerouting shards: [high disk watermark exceeded on one or more nodes] 
[2017-02-13T09:19:14,443][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] high disk watermark [90%] exceeded on [ntIFoHQnTAahC7_0cEt32Q][ntIFoHQ][/usr/share/elasticsearch/data/nodes/0] free: 1gb[5.7%], shards will be relocated away from this node 
[2017-02-13T09:19:44,446][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] high disk watermark [90%] exceeded on [ntIFoHQnTAahC7_0cEt32Q][ntIFoHQ][/usr/share/elasticsearch/data/nodes/0] free: 1gb[5.7%], shards will be relocated away from this node 
[2017-02-13T09:19:44,447][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] rerouting shards: [high disk watermark exceeded on one or more nodes] 
[2017-02-13T09:20:14,453][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] high disk watermark [90%] exceeded on [ntIFoHQnTAahC7_0cEt32Q][ntIFoHQ][/usr/share/elasticsearch/data/nodes/0] free: 1gb[5.7%], shards will be relocated away from this node 
[2017-02-13T09:20:44,459][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] high disk watermark [90%] exceeded on [ntIFoHQnTAahC7_0cEt32Q][ntIFoHQ][/usr/share/elasticsearch/data/nodes/0] free: 1gb[5.7%], shards will be relocated away from this node 
[2017-02-13T09:20:44,459][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] rerouting shards: [high disk watermark exceeded on one or more nodes] 
[2017-02-13T09:21:14,467][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] high disk watermark [90%] exceeded on [ntIFoHQnTAahC7_0cEt32Q][ntIFoHQ][/usr/share/elasticsearch/data/nodes/0] free: 1gb[5.7%], shards will be relocated away from this node 
[2017-02-13T09:21:44,471][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] high disk watermark [90%] exceeded on [ntIFoHQnTAahC7_0cEt32Q][ntIFoHQ][/usr/share/elasticsearch/data/nodes/0] free: 1gb[5.7%], shards will be relocated away from this node 
[2017-02-13T09:21:44,471][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] rerouting shards: [high disk watermark exceeded on one or more nodes] 
[2017-02-13T09:22:14,482][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] high disk watermark [90%] exceeded on [ntIFoHQnTAahC7_0cEt32Q][ntIFoHQ][/usr/share/elasticsearch/data/nodes/0] free: 1gb[5.7%], shards will be relocated away from this node 
[2017-02-13T09:22:44,485][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] high disk watermark [90%] exceeded on [ntIFoHQnTAahC7_0cEt32Q][ntIFoHQ][/usr/share/elasticsearch/data/nodes/0] free: 1gb[5.7%], shards will be relocated away from this node 
[2017-02-13T09:22:44,485][INFO ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] rerouting shards: [high disk watermark exceeded on one or more nodes] 
[2017-02-13T09:23:14,497][WARN ][o.e.c.r.a.DiskThresholdMonitor] [ntIFoHQ] high disk watermark [90%] exceeded on [ntIFoHQnTAahC7_0cEt32Q][ntIFoHQ][/usr/share/elasticsearch/data/nodes/0] free: 1gb[5.7%], shards will be relocated away from this node 

Answer


You want to use http.publish_host=192.168.99.100 in combination with http.port. As your log output shows, network.publish_host only took effect on the transport layer (192.168.99.100:9300), while the HTTP layer fell back to the container's internal address (172.17.0.2:9201), so the HTTP publish host has to be set explicitly.
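A minimal sketch of the corrected command, assuming the same image and port mapping as in the question; only the setting name changes from network.publish_host to http.publish_host:

docker run -d -p 9201:9201 --name elasticsearch_test elasticsearch:5.2-alpine elasticsearch -Ehttp.publish_host="192.168.99.100" -Ehttp.port="9201"

With that, the o.e.h.HttpServer line in the logs should report publish_address {192.168.99.100:9201}.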
