One-click k8s installation with Vagrant + Ansible: fixing the case where only one node-exporter target works after installing Prometheus

Architecture

Vagrant brings up three VMs: one master and two nodes. Each VM has two NICs: eth0 is the internal NIC, which can only reach the outside world through NAT, and eth1 is the private NIC over which the VMs can reach each other.

role | master | node1 | node2
--- | --- | --- | ---
private IP (eth1) | 192.168.56.120/24 | 192.168.56.121/24 | 192.168.56.122/24
internal NIC eth0 (NAT) | 10.0.2.15/24 | 10.0.2.15/24 | 10.0.2.15/24
default route | 10.0.2.2 | 10.0.2.2 | 10.0.2.2
Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  config.vm.box = "centos/7"
  #config.ssh.keys_only=false
  #config.ssh.username='root' 

  config.vm.provider "virtualbox" do |v|
    v.default_nic_type = "82543GC"
  end
  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # NOTE: This will enable public access to the opened port
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine and only allow access
  # via 127.0.0.1 to disable public access
  # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
     vb.memory = "4024"
   end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   apt-get update
  #   apt-get install -y apache2
  # SHELL
  #
  config.vm.provision "shell",path: "/home/loony/vagrant/script/premit_root_login.sh"
    config.vm.define "master" do |master|
    master.vm.provider "master" do |m|
      m.memory = 512
      m.cpus = 2
      m.name = "master"
    end
    #端口转发
    #master.vm.network "forwarded_port", guest: 22, host: 2022
    #私有网
    master.vm.network "private_network", ip: "192.168.56.120"
    #安装ansible的脚本,放在vagrantfile同目录下
    master.vm.hostname = "master"
  end

  config.vm.define "node1" do |node1|
    node1.vm.provider "mode1" do |n1|
      n1.memory = 512
      n1.cpus = 1
      n1.name = "node1"
    end
    #node1.vm.network "forwarded_port", guest: 22, host: 2023
    node1.vm.network "private_network", ip: "192.168.56.121"
    node1.vm.hostname = "node1"
  end

  config.vm.define "node2" do |node2|
    node2.vm.provider "mode2" do |n2|
      n2.memory = 512
      n2.cpus = 1
      n2.name = "node2"
    end
    #node2.vm.network "forwarded_port", guest: 22, host: 2024
    node2.vm.network "private_network", ip: "192.168.56.122"
    node2.vm.hostname = "node2"
  end
end

Then bring the VMs up with vagrant up.
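For reference, the commands involved are plain Vagrant, nothing specific to this setup:

 vagrant up            #create and provision master, node1 and node2
 vagrant status        #confirm all three VMs are running
 vagrant ssh master    #log in to the master, e.g. to run the ansible playbooks and kubectl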
For the k8s installation itself, see the referenced guide. Note: choose flannel as the network plugin, and be aware that with two network interfaces per VM flannel picks the wrong one and pod networking breaks; the interface has to be changed to eth1.
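A minimal sketch of that change, assuming flannel was deployed from the upstream kube-flannel.yml (the DaemonSet name, label and manifest layout may differ depending on your installer): add --iface=eth1 to the flanneld container arguments so flannel binds to the private NIC, then recreate the flannel pods.

 #in kube-flannel.yml (or: kubectl -n kube-system edit ds kube-flannel-ds), extend the flanneld args:
 #  args:
 #  - --ip-masq
 #  - --kube-subnet-mgr
 #  - --iface=eth1        #bind flanneld to the private NIC
 #then recreate the flannel pods so they pick up the new argument:
 kubectl -n kube-system delete pod -l app=flannel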

#install prometheus:
 git clone https://github.com/coreos/prometheus-operator.git
 cd prometheus-operator
 git tag -l
 #check out the v0.26.0 code: the latest code no longer ships the example manifests used here, so we use the older release.
 git checkout v0.26.0
 cd contrib/kube-prometheus/
 kubectl apply -f .
 #everything is deployed now. Then change the prometheus service to type NodePort so it can be reached from outside the cluster (a one-liner is sketched below).
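A sketch of that NodePort change, assuming the standard kube-prometheus service name prometheus-k8s in the monitoring namespace:

 kubectl -n monitoring patch svc prometheus-k8s -p '{"spec":{"type":"NodePort"}}'
 #check the assigned nodePort, then browse to http://<node-ip>:<nodePort>
 kubectl -n monitoring get svc prometheus-k8s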

When we finally open the Prometheus targets page, only the master's node-exporter is healthy; the other two are broken.

Endpoint | State | Labels | Last Scrape | Scrape Duration | Error
--- | --- | --- | --- | --- | ---
https://192.168.56.120:9100/metrics | up | endpoint="https" instance="192.168.56.120:9100" job="node-exporter" namespace="monitoring" pod="node-exporter-rkt8t" service="node-exporter" | 14.142s ago | 17.93ms |
https://192.168.56.121:9100/metrics | down | endpoint="https" instance="192.168.56.121:9100" job="node-exporter" namespace="monitoring" pod="node-exporter-ct5c9" service="node-exporter" | 25.897s ago | 10s | context deadline exceeded
https://192.168.56.122:9100/metrics | up | endpoint="https" instance="192.168.56.122:9100" job="node-exporter" namespace="monitoring" pod="node-exporter-scqfz" service="node-exporter" | 17.174s ago | 40ms |
(.122 shows up in the table above only because the snapshot was taken after I applied the fix; initially it was down as well.)

Troubleshooting
  1. First, check whether the pods themselves are healthy.
    Looking at the pod status and logs turns up an error:
[root@node2 ~]# kubectl get pod -n monitoring  -o wide 
NAME                                   READY     STATUS    RESTARTS   AGE       IP               NODE
alertmanager-main-0                    2/2       Running   16         11d       172.20.2.41      192.168.56.121
alertmanager-main-1                    2/2       Running   4          4d        172.20.2.45      192.168.56.121
alertmanager-main-2                    2/2       Running   8          11d       172.20.2.43      192.168.56.121
grafana-6fbd447b7f-bxnzc               1/1       Running   2          4d        172.20.1.35      192.168.56.122
kube-state-metrics-7fbb5c8dcf-jr7mk    4/4       Running   8          4d        172.20.1.36      192.168.56.122
node-exporter-ct5c9                    2/2       Running   4          5d        192.168.56.121   192.168.56.121
node-exporter-rkt8t                    2/2       Running   4          5d        192.168.56.120   192.168.56.120
node-exporter-scqfz                    2/2       Running   6          5d        192.168.56.122   192.168.56.122
prometheus-adapter-fdc4c474d-8btlf     1/1       Running   2          4d        172.20.2.49      192.168.56.121
prometheus-k8s-0                       3/3       Running   7          4d        172.20.1.34      192.168.56.122
prometheus-k8s-1                       3/3       Running   13         11d       172.20.2.40      192.168.56.121
prometheus-operator-78c7bdf4cd-k6pt2   1/1       Running   4          11d       172.20.2.47      192.168.56.121
[root@node2 ~]# kubectl logs  node-exporter-ct5c9  kube-rbac-proxy -n monitoring 

E0916 07:53:17.947827    7843 webhook.go:106] Failed to make webhook authenticator request: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:53:17.947867    7843 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:53:25.351614    7843 webhook.go:106] Failed to make webhook authenticator request: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:53:25.351700    7843 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:53:47.852277    7843 webhook.go:106] Failed to make webhook authenticator request: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:53:47.852361    7843 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:53:54.949203    7843 webhook.go:106] Failed to make webhook authenticator request: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:53:54.949237    7843 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:54:17.649011    7843 webhook.go:106] Failed to make webhook authenticator request: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:54:17.649042    7843 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:54:24.948506    7843 webhook.go:106] Failed to make webhook authenticator request: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:54:24.948611    7843 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:54:47.749273    7843 webhook.go:106] Failed to make webhook authenticator request: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:54:47.749355    7843 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:54:55.048066    7843 webhook.go:106] Failed to make webhook authenticator request: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:54:55.048104    7843 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:55:17.849120    7843 webhook.go:106] Failed to make webhook authenticator request: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:55:17.849206    7843 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:55:24.951139    7843 webhook.go:106] Failed to make webhook authenticator request: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
E0916 07:55:24.951190    7843 proxy.go:67] Unable to authenticate the request due to an error: Post https://10.68.0.1:443/apis/authentication.k8s.io/v1beta1/tokenreviews: dial tcp 10.68.0.1:443: i/o timeout
#10.68.0.1 turns out to be the cluster IP of the kubernetes API service, which handles these token review requests.
[root@node2 ~]# kubectl describe svc kubernetes 
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.68.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         192.168.56.120:6443
Session Affinity:  None
Events:            <none>

At this point the situation is fairly clear: kube-rbac-proxy cannot reach the API server to authenticate the scrape request, so it never returns any metrics.
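A quick way to reproduce the symptom from the affected node itself (assuming curl is installed; any TCP client works just as well) is to hit the service IP directly; the same request answers on the master but times out on node1/node2:

 curl -kv --connect-timeout 5 https://10.68.0.1:443/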
2. So the next step is to look at the network. My kube-proxy forwards traffic with iptables, so I focused on its NAT rules.

[root@node1 ~]# iptables -t nat -L -n | grep 10.68.0.1
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  0.0.0.0/0            10.68.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443
[root@node1 ~]# iptables -t nat -L -n | grep  KUBE-SVC-NPX46M4PTMTKRN6Y  -C 4
KUBE-SVC-JD5MR3NA4I4DYORP  tcp  --  0.0.0.0/0            10.68.0.2            /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
KUBE-SVC-XGLOHA7QRQ3V22RZ  tcp  --  0.0.0.0/0            10.68.199.173        /* kube-system/kubernetes-dashboard: cluster IP */ tcp dpt:443
KUBE-SVC-GRVIJZ6QHJZF73YT  tcp  --  0.0.0.0/0            10.68.167.109        /* monitoring/prometheus-adapter:https cluster IP */ tcp dpt:443
KUBE-SVC-LC5QY66VUV2HJ6WZ  tcp  --  0.0.0.0/0            10.68.215.148        /* kube-system/metrics-server: cluster IP */ tcp dpt:443
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  0.0.0.0/0            10.68.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443
KUBE-NODEPORTS  all  --  0.0.0.0/0            0.0.0.0/0            /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-AWA2CQSXVI7X2GE5 (2 references)
target     prot opt source               destination         
--
KUBE-SEP-Z7DXBDWOKJUUV3R6  all  --  0.0.0.0/0            0.0.0.0/0            /* monitoring/alertmanager-main:web */ statistic mode random probability 0.33332999982
KUBE-SEP-CUVR46UWO3RNXTFX  all  --  0.0.0.0/0            0.0.0.0/0            /* monitoring/alertmanager-main:web */ statistic mode random probability 0.50000000000
KUBE-SEP-DPH3VEMYYJMD3X5O  all  --  0.0.0.0/0            0.0.0.0/0            /* monitoring/alertmanager-main:web */

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target     prot opt source               destination         
KUBE-SEP-SOBI5KFPYF53UNHB  all  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */

Chain KUBE-SVC-QVMAL4WIBXZZ2IW5 (1 references)
[root@node1 ~]# iptables -t nat -L -n | grep  KUBE-SEP-SOBI5KFPYF53UNHB  -C 4
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  172.20.1.34          0.0.0.0/0            /* monitoring/prometheus-k8s:web */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* monitoring/prometheus-k8s:web */ recent: SET name: KUBE-SEP-SMCLGLW5BZIBRCEZ side: source mask: 255.255.255.255 tcp to:172.20.1.34:9090

Chain KUBE-SEP-SOBI5KFPYF53UNHB (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  192.168.56.120       0.0.0.0/0            /* default/kubernetes:https */
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ tcp to:192.168.56.120:6443

--
KUBE-SEP-DPH3VEMYYJMD3X5O  all  --  0.0.0.0/0            0.0.0.0/0            /* monitoring/alertmanager-main:web */

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target     prot opt source               destination         
KUBE-SEP-SOBI5KFPYF53UNHB  all  --  0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */

Chain KUBE-SVC-QVMAL4WIBXZZ2IW5 (1 references)
target     prot opt source               destination         
KUBE-SEP-JYEVYFUNMWZUWRAQ  all  --  0.0.0.0/0            0.0.0.0/0            /* monitoring/kube-state-metrics:https-main */

Going through these rules, the DNAT itself is correct; the chains Kubernetes generates are just a bit dizzying to follow. In short, traffic destined for 10.68.0.1:443 is DNATed to 192.168.56.120:6443.
Telnet from the node still fails, though, and a packet capture shows only outgoing SYNs with nothing coming back.
In a second terminal run: tcpdump -i eth1 tcp port 6443 -w /tmp/xxx.cap
Then in the original terminal run: telnet 10.68.0.1 443

9   0.094555    10.0.2.15   192.168.56.120  TCP 74  54660 → 6443 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=4979508 TSecr=0 WS=128
42  2.098645    10.0.2.15   192.168.56.120  TCP 74  [TCP Retransmission] 54660 → 6443 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=4981512 TSecr=0 WS=128
99  6.106390    10.0.2.15   192.168.56.120  TCP 74  [TCP Retransmission] 54660 → 6443 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=4985520 TSecr=0 WS=128

So the reply packets really never come back. The source address also looks wrong: it should be the node's IP on the cluster network (eth1), not the eth0 IP. After more digging,
I followed the steps from an article on connection tracking with nf_conntrack, NAT and stateful firewalls, traced the NAT connections, and found the problem:

[root@node1 ~]# conntrack -L | grep 10.68.0.1 
tcp      6 41 SYN_SENT src=10.0.2.15 dst=10.68.0.1 sport=50468 dport=443 [UNREPLIED] src=192.168.56.120 dst=10.0.2.15 sport=6443 dport=50468 mark=0 use=2
tcp      6 108 SYN_SENT src=10.0.2.15 dst=10.68.0.1 sport=51136 dport=443 [UNREPLIED] src=192.168.56.120 dst=10.0.2.15 sport=6443 dport=51136 mark=0 use=1
tcp      6 119 SYN_SENT src=10.0.2.15 dst=10.68.0.1 sport=51320 dport=443 [UNREPLIED] src=192.168.56.120 dst=10.0.2.15 sport=6443 dport=51320 mark=0 use=1
tcp      6 86394 ESTABLISHED src=172.20.2.49 dst=10.68.0.1 sport=39148 dport=443 src=192.168.56.120 dst=192.168.56.121 sport=6443 dport=48813 [ASSURED] mark=0 use=1
tcp      6 18 SYN_SENT src=10.0.2.15 dst=10.68.0.1 sport=50280 dport=443 [UNREPLIED] src=192.168.56.120 dst=10.0.2.15 sport=6443 dport=50280 mark=0 use=1
tcp      6 11 SYN_SENT src=10.0.2.15 dst=10.68.0.1 sport=50216 dport=443 [UNREPLIED] src=192.168.56.120 dst=10.0.2.15 sport=6443 dport=50216 mark=0 use=1
tcp      6 86399 ESTABLISHED src=172.20.2.44 dst=10.68.0.1 sport=59716 dport=443 src=192.168.56.120 dst=192.168.56.121 sport=6443 dport=58480 [ASSURED] mark=0 use=1
tcp      6 86379 ESTABLISHED src=172.20.2.47 dst=10.68.0.1 sport=56254 dport=443 src=192.168.56.120 dst=192.168.56.121 sport=6443 dport=49901 [ASSURED] mark=0 use=1
tcp      6 86394 ESTABLISHED src=172.20.2.48 dst=10.68.0.1 sport=54932 dport=443 src=192.168.56.120 dst=192.168.56.121 sport=6443 dport=61273 [ASSURED] mark=0 use=1
tcp      6 86373 ESTABLISHED src=172.20.2.50 dst=10.68.0.1 sport=47294 dport=443 src=192.168.56.120 dst=192.168.56.121 sport=6443 dport=35330 [ASSURED] mark=0 use=1
tcp      6 78 SYN_SENT src=10.0.2.15 dst=10.68.0.1 sport=50884 dport=443 [UNREPLIED] src=192.168.56.120 dst=10.0.2.15 sport=6443 dport=50884 mark=0 use=1
tcp      6 116 SYN_SENT src=10.0.2.15 dst=10.68.0.1 sport=51060 dport=443 [UNREPLIED] src=192.168.56.120 dst=10.0.2.15 sport=6443 dport=51060 mark=0 use=2
tcp      6 86399 ESTABLISHED src=172.20.2.40 dst=10.68.0.1 sport=42240 dport=443 src=192.168.56.120 dst=192.168.56.121 sport=6443 dport=28707 [ASSURED] mark=0 use=1
tcp      6 86399 ESTABLISHED src=172.20.2.46 dst=10.68.0.1 sport=38352 dport=443 src=192.168.56.120 dst=192.168.56.121 sport=6443 dport=19299 [ASSURED] mark=0 use=1
tcp      6 71 SYN_SENT src=10.0.2.15 dst=10.68.0.1 sport=50744 dport=443 [UNREPLIED] src=192.168.56.120 dst=10.0.2.15 sport=6443 dport=50744 mark=0 use=1
tcp      6 101 SYN_SENT src=10.0.2.15 dst=10.68.0.1 sport=51072 dport=443 [UNREPLIED] src=192.168.56.120 dst=10.0.2.15 sport=6443 dport=51072 mark=0 use=1
conntrack v1.4.4 (conntrack-tools): 536 flow entries have been shown.
tcp      6 48 SYN_SENT src=10.0.2.15 dst=10.68.0.1 sport=50530 dport=443 [UNREPLIED] src=192.168.56.120 dst=10.0.2.15 sport=6443 dport=50530 mark=0 use=1
tcp      6 86394 ESTABLISHED src=172.20.2.42 dst=10.68.0.1 sport=47142 dport=443 src=192.168.56.120 dst=192.168.56.121 sport=6443 dport=6262 [ASSURED] mark=0 use=1

Several of the SYN_SENT entries have src=10.0.2.15. DNAT does not rewrite the source address, so when 192.168.56.120 receives such a packet and replies to 10.0.2.15, the reply can never get back to this node, because every VM has the same 10.0.2.15 NAT address on eth0.
A look at the routing table makes the cause obvious:

[root@node1 ~]# ip route 
default via 10.0.2.2 dev eth0 proto dhcp metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 
172.20.0.0/24 via 172.20.0.0 dev flannel.1 onlink 
172.20.1.0/24 via 172.20.1.0 dev flannel.1 onlink 
172.20.2.0/24 dev cni0 proto kernel scope link src 172.20.2.1 
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.121 metric 101

The default route goes via 10.0.2.2 on eth0, so when the node connects to 10.68.0.1 the kernel picks the eth0 IP as the source address. A route has to be added manually; after that, telnet connects:


[root@node1 ~]# ip route add 10.68.0.0/24 via 192.168.56.1
[root@node1 ~]# telnet 10.68.0.1 443
Trying 10.68.0.1...
Connected to 10.68.0.1.
Escape character is '^]'.

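One follow-up: the ip route add above does not survive a reboot. A sketch of making it persistent on these CentOS 7 guests, plus clearing the stuck conntrack entries, assuming the classic network-scripts layout and that conntrack-tools is installed (it is used above):

 #persist the route so ifup re-adds it on every boot
 echo "10.68.0.0/24 via 192.168.56.1" >> /etc/sysconfig/network-scripts/route-eth1
 #drop the stale UNREPLIED entries towards the service IP so new connections are re-evaluated
 conntrack -D -d 10.68.0.1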

To sum up: building a k8s environment on VirtualBox with Vagrant has quite a few pitfalls, mostly around networking. Flannel has to be told which interface to use for its traffic, and the route for the service network also has to be added manually on each node, otherwise things break in the way described above.
