https://pastebin.com/gqPLwSFq

^ output of my resolv.conf and cloudflare logs

NAMESPACE     NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-system   kube-dns   ClusterIP   10.90.0.10   <none>        53/UDP,53/TCP,9153/TCP   2d15h

^ my service ip for kubedns
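For reference, pods pick up that ClusterIP as their nameserver from the /etc/resolv.conf kubelet writes into each container, so a quick sanity check (using the wordpress pod from the example further down; the exact search domains and options will vary per cluster) is something like:

kubectl exec -it wordpress-7767b5d9c4-qh59n -- cat /etc/resolv.conf
# expect roughly:
# nameserver 10.90.0.10
# search default.svc.cluster.local svc.cluster.local cluster.local
# options ndots:5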

https://pastebin.com/BCBhh8aj

^ my cloudflare config

How come, despite there being no mention of 8.8.8.8 anywhere on my system (not in any DNS config for kube-dns, not in my resolv.conf, not in the tunnel config), the tunnel is now incorrectly trying to use it to resolve internal IPs? It does not make any sense.
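For anyone wanting to double-check the same thing, one way to confirm whether anything in the cluster's DNS config mentions 8.8.8.8 is to dump the rendered Corefile and grep every ConfigMap (this won't catch values baked into container images or into the tunnel itself, so treat it as a partial check):

kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
kubectl get configmap -A -o yaml | grep -n '8.8.8.8'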

I think internal DNS resolution is working fine overall; here is an example of me accessing Traefik from one of my pods:

spiderunderurbed@raspberrypi:~/k8s $ kubectl exec -it wordpress-7767b5d9c4-qh59n -- curl traefik.default.svc.cluster.local 
404 page not found
spiderunderurbed@raspberrypi:~/k8s $ 

^ means Traefik was accessed (it's my ingress, so the 404 just means no matching route), and there is nothing about 8.8.8.8 anywhere in that path; it might be baked into my Cloudflare tunnel.
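To rule DNS in or out more directly than going through curl, the lookup itself can be run from the pod; assuming the wordpress image has getent available (it's part of glibc, so it's usually present even when nslookup isn't):

kubectl exec -it wordpress-7767b5d9c4-qh59n -- getent hosts traefik.default.svc.cluster.local
# should print the Service's ClusterIP, resolved via 10.90.0.10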

    SpiderUnderUrBed@lemmy.zip (OP) · 1 day ago
    Never mind, fixed. This is what I tried applying; or maybe I should have waited a bit and it might have worked on its own. Regardless, in case it's useful to anyone:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            health
            ready
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                fallthrough in-addr.arpa ip6.arpa
            }
            hosts /etc/coredns/NodeHosts {
                ttl 60
                reload 15s
                fallthrough
            }
            prometheus :9153
            forward . 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4
            cache 30
            loop
            reload
            loadbalance
        }
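
    Side note in case anyone applies a similar change: because that Corefile includes the reload plugin, CoreDNS should re-read the ConfigMap on its own after a short delay (which is probably why just waiting might also have worked). To force it immediately, assuming the usual deployment name coredns and that the ConfigMap above is saved as coredns-configmap.yaml:

    kubectl apply -f coredns-configmap.yaml
    kubectl -n kube-system rollout restart deployment coredns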
    
    

    The issue is solved now, thanks