AWX unable to resolve local domain servers: Customizing Kubernetes k3s CoreDNS

In my last post, I created a container group for linking AWX with my domain Kerberos for authentication against Windows hosts. It turned out my AWX pod was unable to look up any of my Windows domain servers. Simple testing showed it could reach the host on the right port, but name resolution from a pod failed:

kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup hosta.contoso.com

Obviously, instead of hosta.contoso.com I was using an actual host in my actual domain. I thought my next step was to create another Container group linking my Linux host’s /etc/resolv.conf with my Execution Environment, but that would not work. After some googling I saw that others were having similar issues and had resolved them by updating Kubernetes CoreDNS to forward all queries for the local domain to one of their local domain DNS servers.

To play it safe, I copied my existing CoreDNS configuration by running the following:

kubectl -n kube-system get configmap coredns -o yaml
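
If you'd rather capture that straight into a file (the filename here is just my own choice), redirect the output:

kubectl -n kube-system get configmap coredns -o yaml > coredns-custom.yml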

With that output saved in coredns-custom.yml, I added a forwarder section for my internal domain:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    contoso.com:53 {
        errors
        cache 30
        forward . 10.5.1.53
    }
    import /etc/coredns/custom/*.server
  NodeHosts: |
    10.5.1.8 localhost.localdomain
kind: ConfigMap
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAAAA/4yQwWrzMBCEX0Xs2fEf20nsX9BDybH02lMva2kdq1Z2g6SkBJN3L8IUCiVtbyNGOzvfzoAn90IhOmHQcKmgAIsJQc+wl0CD8wQaSr1t1PzKSilFIUiIix4JfRoXHQjtdZHTuafAlCgq488xUSi9wK2AybEFDXvhwR2e8QQFHCnh50ZkloTJCcf8lP6NTIqUyuCkNJiSp9LJP5czoLjryztTWB0uE2iYmvjFuVSFenJsHx6tFf41gvGY6Y0Eshz/9D2e0OSZfIJVvMZExwzusSf/I9SIcQQNvaG6a+r/XVdV7abBddPtsN9W66Eedi0N7aberM22zaHf6t0tcPsIAAD//8Ix+PfoAQAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: coredns
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2023-01-24T18:28:23Z"
  labels:
    objectset.rio.cattle.io/hash: bce283298811743a0386ab510f2f67ef74240c57
  name: coredns
  namespace: kube-system

Now you can apply the new forwarder to Kubernetes CoreDNS with the following command:

kubectl apply -f coredns-custom.yml
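
The reload plugin in the Corefile should pick the change up on its own, but if it doesn't seem to take effect you can bounce CoreDNS, which on k3s runs as a deployment in kube-system:

kubectl -n kube-system rollout restart deployment coredns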

You can test that it applied and works by running the lookup again:

kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup hosta.contoso.com

Now my Kubernetes DNS was resolving as expected and in turn so was AWX!

Using AWX Container groups for Kerberos authentication of playbooks/templates running against Windows servers/hosts

I have been porting some of my Ansible playbooks for Windows over to AWX, and while they worked in my home lab, they didn’t cooperate when I moved them over to my work environment. This is because I was initially testing on stand-alone Windows servers and clients in my home lab, while in my office environment we obviously use a Windows AD domain. With the Ansible CLI, I would just set up Kerberos authentication on my Ansible host. This is not as easy when dealing with AWX running in Kubernetes pods.

In this situation I will use the stock “AWX EE (latest)” Execution Environment, but with that you will need to tell AWX how to reach your Kerberos server (AD server). We will need to configure a Container Group, linked to the Ansible Execution Environment, that lets Ansible know about your Kerberos environment. If you haven’t already configured your Windows hosts for connections via WinRM, you can read the following documentation. My environment was already set up for this since I have already been controlling/automating my Windows servers via the Ansible CLI.

To prepare Kubernetes for this container group, you will need to create a config map that will handle your Kerberos authentication. In your favorite editor (mine is vi), create a file in your home directory or “/tmp” called krb5.conf. In my example below I have two realms listed because my AWX host works against two domains.

[libdefaults]
 default_realm = CONTOSO.COM

[realms]
 CONTOSO.COM = {
  kdc = DC2.CONTOSO.COM
 }
 STUFF.COM = {
  kdc = DOUBLE.STUFF.COM
 }

[domain_realm]
.contoso.com = CONTOSO.COM
contoso.com = CONTOSO.COM
.stuff.com = STUFF.COM
stuff.com = STUFF.COM
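
If the Kerberos client tools are installed on your Linux host (the krb5-workstation package on Alma/RHEL), you can sanity-check the realm settings before handing the file over to AWX; the user below is just a placeholder:

KRB5_CONFIG=./krb5.conf kinit someuser@CONTOSO.COM
klist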

Now we can map this file into Kubernetes by running the following:

kubectl -n awx create configmap awx-kerberos-config --from-file=krb5.conf
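
If you ever need to change krb5.conf later, one way to refresh the ConfigMap in place is a client-side dry run piped back through apply:

kubectl -n awx create configmap awx-kerberos-config --from-file=krb5.conf --dry-run=client -o yaml | kubectl apply -f -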

Now that your krb5.conf is mapped in Kubernetes, you will want to ensure it has been created by running the following:

kubectl -n awx get configmap awx-kerberos-config -o yaml

You should see output in YAML format that shows your krb5.conf. Now in AWX, in the left column, click on “Instance Groups” under the Administration section:

In the “Instance Groups” menu, click “Add”, then “Add Container group”

In the new Container group menu, you can name it whatever you want; in my case I am naming it Kerberos. The only other thing you will need to do is make sure you check “Customize pod specification”.

Now you will want to edit the “Custom pod spec” YAML; mine looks like this:

apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
    - image: 'quay.io/ansible/awx-ee:latest'
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
      resources:
        requests:
          cpu: 250m
          memory: 100Mi
      volumeMounts:
        - name: awx-kerberos-volume
          mountPath: /etc/krb5.conf
          subPath: krb5.conf
  volumes:
    - name: awx-kerberos-volume
      configMap:
        name: awx-kerberos-config

Make sure you save when you’re done. Now we will need to link this Container group to your template (the equivalent of a playbook run in the Ansible CLI). To link the Container group, edit your template and, towards the bottom of the page, you will see “Instance Groups”; from there, select your Container group.
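
For the template to actually authenticate with Kerberos over WinRM, your inventory (or group vars) still needs the usual WinRM connection variables; a minimal sketch, with the user and port as placeholders for your own environment:

ansible_connection: winrm
ansible_port: 5986
ansible_winrm_transport: kerberos
ansible_winrm_server_cert_validation: ignore
ansible_user: someuser@CONTOSO.COM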

Now you should be able to run your Windows-based playbooks/templates in AWX. For me, my issue was not solved there; I had some extra troubleshooting to do, which turned out to be Kubernetes k3s DNS issues that I will talk about in my next post. If you need assistance troubleshooting you can refer to the README located here. You can always contact me as well.

Installing AWX on Alma Linux 9.1

Back in November of 2022, I went about installing and configuring AWX in my home lab on Alma Linux 9, and during that install I ran into and resolved a few minor issues (see my last post: https://chr00t.com/installing-awx-on-almalinux-9/).

Earlier this week I decided to install AWX at my work since my testing at home went well. My reason for installing AWX at the office is that I have been using the Ansible CLI for years, but I would really like the GUI so my junior admins can use Ansible without having to know the CLI, especially if they are geared more towards Windows.

This install went extremely smoothly:

  • After a fresh install of Alma, run updates: yum -y update
  • Install tar & git: yum -y install tar git
  • Disable firewall: systemctl disable firewalld --now
  • Disable selinux via your favorite editor (mine is vi… vi 4 life lol): vi /etc/sysconfig/selinux
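In that file, the line you're after should end up looking like this (disabling SELinux outright is the quick lab-friendly route; setting it to permissive would also work):
SELINUX=disabled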
  • reboot your system
  • Install Kubernetes k3s: curl -sfL https://get.k3s.io | sh -
  • Download Kustomize: curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
  • move kustomize executable: mv kustomize /usr/local/bin/
  • you can check your k3s version via: kubectl version
You should see something like:
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.5+k3s2", GitCommit:"de654222cb2c0c21776c32c26505fb684b246a1b", GitTreeState:"clean", BuildDate:"2023-01-11T21:23:33Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.5+k3s2", GitCommit:"de654222cb2c0c21776c32c26505fb684b246a1b", GitTreeState:"clean", BuildDate:"2023-01-11T21:23:33Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
  • Now you can check your k3s environment status via: kubectl get nodes
You should see something like this:
NAME                    STATUS   ROLES                  AGE    VERSION
localhost.localdomain   Ready    control-plane,master   4d2h   v1.25.5+k3s2
  • Create kustomization.yaml for initial creation of AWX operator pods: vi kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Find the latest tag here: https://github.com/ansible/awx-operator/releases
  - github.com/ansible/awx-operator/config/default?ref=1.1.4
  
# Set the image tags to match the git version from above
images:
  - name: quay.io/ansible/awx-operator
    newTag: 1.1.4

# Specify a custom namespace in which to install AWX
namespace: awx


  • Now that you have kustomization.yaml ready to go, you can deploy your initial AWX operator: kustomize build . | kubectl apply -f -
  • After running the above command, you can check the status of the operator pods via: kubectl get pods -n awx
  • Once your AWX operator pods are ready, we can install your AWX instance by first creating a file: vi awx-demo.yaml
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  service_type: nodeport
  nodeport_port: 30080
  #projects_persistence: true
  #projects_storage_class: rook-ceph
  #projects_storage_size: 10Gi
  • Now we need to edit kustomization.yaml again: vi kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Find the latest tag here: https://github.com/ansible/awx-operator/releases
  - github.com/ansible/awx-operator/config/default?ref=1.1.4
  - awx-demo.yaml

# Set the image tags to match the git version from above
images:
  - name: quay.io/ansible/awx-operator
    newTag: 1.1.4

# Specify a custom namespace in which to install AWX
namespace: awx
  • Let's set the default namespace via: kubectl config set-context --current --namespace=awx
  • Now we build our AWX instance: kustomize build . | kubectl apply -f -
  • The above step will take a while; you can review the logs with the following command: kubectl logs -f deployments/awx-operator-controller-manager -c awx-manager
  • You will need to watch the logs until you see: PLAY RECAP ******************************* localhost : ok=69 changed=0 unreachable=0 failed=0 … etc.
  • You will want to make sure you see no failed or unreachable tasks; otherwise, blow away your AWX operator deployment and redeploy/retry.
  • Provided everything went well, run: kubectl get pods -n awx
You should see something like:
NAME                                               READY   STATUS    RESTARTS      AGE
awx-postgres-13-0                                  1/1     Running   2 (43h ago)   4d4h
awx-67d97b57d9-hdtqb                               4/4     Running   8 (43h ago)   4d4h
awx-operator-controller-manager-78c7c99946-7dcm9   2/2     Running   8 (43h ago)   4d5h
  • Lastly, you will need to gather the temp password created during the install via: kubectl get secret awx-admin-password -o jsonpath="{.data.password}" | base64 --decode
  • Now you can log on to your AWX instance by opening your browser and pointing it at your server's IP: https://servernameorip:30080
  • The username will be admin and the password will be the one you retrieved in the previous step.

If you are unfamiliar with AWX itself, I recommend watching the following video: