AWX unable to resolve local domain servers: Customizing Kubernetes k3s CoreDNS

In my last post, I created a container group linking AWX to my domain Kerberos for authentication against Windows hosts. It turned out my AWX pod was unable to look up any of my Windows domain servers by name, even though simple testing showed it could reach the hosts on the right ports.

kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup hosta.contoso.com

Obviously, instead of hosta.contoso.com I was using an actual host in my actual domain. My first thought was to create another container group linking my Linux host’s /etc/resolv.conf into my Execution Environment, but that would not work. After some googling I saw others were having similar issues, and they resolved them by updating Kubernetes CoreDNS to forward all queries for the local domain to a local domain DNS server.

To play it safe, I copied my existing CoreDNS configuration by running the following:

kubectl -n kube-system get configmap coredns -o yaml

I saved the output of that to a file called coredns-custom.yml and added a forwarder section for my internal domain.

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    contoso.com:53 {
        errors
        cache 30
        forward . 10.5.1.53
    }
    import /etc/coredns/custom/*.server
  NodeHosts: |
    10.5.1.8 localhost.localdomain
kind: ConfigMap
metadata:
  annotations:
    objectset.rio.cattle.io/applied: H4sIAAAAAAAA/4yQwWrzMBCEX0Xs2fEf20nsX9BDybH02lMva2kdq1Z2g6SkBJN3L8IUCiVtbyNGOzvfzoAn90IhOmHQcKmgAIsJQc+wl0CD8wQaSr1t1PzKSilFIUiIix4JfRoXHQjtdZHTuafAlCgq488xUSi9wK2AybEFDXvhwR2e8QQFHCnh50ZkloTJCcf8lP6NTIqUyuCkNJiSp9LJP5czoLjryztTWB0uE2iYmvjFuVSFenJsHx6tFf41gvGY6Y0Eshz/9D2e0OSZfIJVvMZExwzusSf/I9SIcQQNvaG6a+r/XVdV7abBddPtsN9W66Eedi0N7aberM22zaHf6t0tcPsIAAD//8Ix+PfoAQAA
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: coredns
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2023-01-24T18:28:23Z"
  labels:
    objectset.rio.cattle.io/hash: bce283298811743a0386ab510f2f67ef74240c57
  name: coredns
  namespace: kube-system

Now you can apply the new forwarder to Kubernetes CoreDNS with the following command:

kubectl apply -f coredns-custom.yml

You can test that it applied and worked by running:

kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- nslookup hosta.contoso.com

Now my Kubernetes DNS was resolving as expected and in turn so was AWX!
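
One caveat worth noting: k3s manages the stock coredns ConfigMap, so an upgrade or restart can overwrite in-place edits. The Corefile above already contains import /etc/coredns/custom/*.server, and newer k3s releases mount an optional coredns-custom ConfigMap at that path. A sketch of that alternative, assuming your k3s version supports it (the key name contoso.server is arbitrary, but it must end in .server):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  contoso.server: |
    contoso.com:53 {
        errors
        cache 30
        forward . 10.5.1.53
    }
EOF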

Using AWX Container groups for Kerberos authentication of playbooks/templates running against Windows servers/hosts

I have been porting some of my Ansible playbooks for Windows over to AWX, and while they worked in my home lab, they didn’t cooperate when I moved them over to my work environment. That’s because I was initially testing against stand-alone Windows servers and clients in my home lab, while my office environment obviously uses a Windows AD domain. With the Ansible CLI, I would just set up Kerberos authentication on my Ansible host; this is not as easy when AWX is running in Kubernetes pods.

In this situation I will use the stock “AWX EE (latest)” Execution Environment, but you will need to tell AWX how to reach your Kerberos server (AD server). We will configure a Container Group, linked to the Execution Environment, that lets Ansible know about your Kerberos environment. If you haven’t already configured your Windows hosts for connections via WinRM, you can read the following documentation. My environment was already set up for this, since I have been controlling/automating my Windows servers via the Ansible CLI.

To prepare Kubernetes for this container group, you will need to create a config map that will handle your Kerberos authentication. In your favorite editor (mine’s vi), create a file in your home directory or “/tmp” called krb5.conf. In my example below I have two realms listed because my AWX host works across two domains.

[libdefaults]
 default_realm = CONTOSO.COM

[realms]
 CONTOSO.COM = {
  kdc = DC2.CONTOSO.COM
 }
 STUFF.COM = {
  kdc = DOUBLE.STUFF.COM
 }

[domain_realm]
 .contoso.com = CONTOSO.COM
 contoso.com = CONTOSO.COM
 .stuff.com = STUFF.COM
 stuff.com = STUFF.COM

Now we can map this file with Kubernetes by doing the following:

kubectl -n awx create configmap awx-kerberos-config --from-file=krb5.conf

Now that your krb5.conf is mapped in Kubernetes, ensure it was created by running the following:

kubectl -n awx get configmap awx-kerberos-config -o yaml
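
The output should echo the file back under the data key, roughly like this (trimmed here for brevity):

apiVersion: v1
data:
  krb5.conf: |
    [libdefaults]
     default_realm = CONTOSO.COM
    ...
kind: ConfigMap
metadata:
  name: awx-kerberos-config
  namespace: awx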

Now in AWX, in the left column, click on “Instance Groups” under the Administration section:

In the “Instance Groups” menu, click “Add”, then “Add Container group”.

In the new Container group menu, you can name it whatever you want; in my case I am naming it “Kerberos”. The only other thing you will need to do is check “Customize pod specification”.

Now you will want to edit the “Custom pod spec” YAML; mine looks like this:

apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
    - image: 'quay.io/ansible/awx-ee:latest'
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
      resources:
        requests:
          cpu: 250m
          memory: 100Mi
      volumeMounts:
        - name: awx-kerberos-volume
          mountPath: /etc/krb5.conf
          subPath: krb5.conf
  volumes:
    - name: awx-kerberos-volume
      configMap:
        name: awx-kerberos-config

Make sure you save when you’re done. Now we need to link this Container group to your template (the AWX equivalent of a playbook in the Ansible CLI). To link the Container group, edit your template and, towards the bottom of the page, you will see “Instance Groups”; from there, select your Container group.
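
One thing the Container group does not cover is the connection settings themselves: the Windows hosts in your AWX inventory still need the usual WinRM/Kerberos variables. A minimal sketch of the group/host variables typically needed (the values are examples; port 5986 assumes WinRM over HTTPS):

ansible_connection: winrm
ansible_winrm_transport: kerberos
ansible_port: 5986
ansible_winrm_server_cert_validation: ignore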

Now you should be able to run your Windows-based playbooks/templates in AWX. In my case, the issue was not fully solved there; I had some extra troubleshooting to do, which turned out to be Kubernetes k3s DNS issues that I will talk about in my next post. If you need assistance troubleshooting you can refer to the README located here. You can always contact me as well.

Installing AWX on Alma Linux 9.1

Back in November of 2022, I installed and configured AWX in my home lab on Alma Linux 9, and I ran into and resolved a few minor issues along the way (see my last post: https://chr00t.com/installing-awx-on-almalinux-9/).

Earlier this week I decided to install AWX at work, since my testing at home went well. My reason for installing AWX at the office is that I have been using the Ansible CLI for years, but I would really like a GUI so my junior admins can use Ansible without having to know the CLI, especially those geared more towards Windows.

This install went extremely smooth:

  • After a fresh install of Alma, run updates: yum -y update
  • Install tar & git: yum -y install tar git
  • Disable the firewall: systemctl disable firewalld --now
  • Disable SELinux (set SELINUX=disabled) via your favorite editor (mine is vi… vi 4 life lol): vi /etc/sysconfig/selinux
  • Reboot your system
  • Install Kubernetes k3s: curl -sfL https://get.k3s.io | sh -
  • Download Kustomize: curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
  • Move the kustomize executable: mv kustomize /usr/local/bin/
  • Check your k3s version via: kubectl version
You should see something like:
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.5+k3s2", GitCommit:"de654222cb2c0c21776c32c26505fb684b246a1b", GitTreeState:"clean", BuildDate:"2023-01-11T21:23:33Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.5+k3s2", GitCommit:"de654222cb2c0c21776c32c26505fb684b246a1b", GitTreeState:"clean", BuildDate:"2023-01-11T21:23:33Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
  • Now you can check your k3s environment status via: kubectl get nodes
You should see something like this:
NAME                    STATUS   ROLES                  AGE    VERSION
localhost.localdomain   Ready    control-plane,master   4d2h   v1.25.5+k3s2
  • Create kustomization.yaml for initial creation of AWX operator pods: vi kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Find the latest tag here: https://github.com/ansible/awx-operator/releases
  - github.com/ansible/awx-operator/config/default?ref=1.1.4
  
# Set the image tags to match the git version from above
images:
  - name: quay.io/ansible/awx-operator
    newTag: 1.1.4

# Specify a custom namespace in which to install AWX
namespace: awx


  • Now that you have kustomization.yaml ready to go, you can deploy the initial AWX operator: kustomize build . | kubectl apply -f -
  • After running the above command, you can check the status of the operator pods via: kubectl get pods -n awx
  • Once your AWX operator pods are ready, we can install your AWX instance by first creating a file: vi awx-demo.yaml
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  service_type: nodeport
  nodeport_port: 30080
  #projects_persistence: true
  #projects_storage_class: rook-ceph
  #projects_storage_size: 10Gi
  • Now we need to edit kustomization.yaml again: vi kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Find the latest tag here: https://github.com/ansible/awx-operator/releases
  - github.com/ansible/awx-operator/config/default?ref=1.1.4
  - awx-demo.yaml

# Set the image tags to match the git version from above
images:
  - name: quay.io/ansible/awx-operator
    newTag: 1.1.4

# Specify a custom namespace in which to install AWX
namespace: awx
  • Let’s set the default namespace via: kubectl config set-context --current --namespace=awx
  • Now we build our AWX instance: kustomize build . | kubectl apply -f -
  • The above step will take a while; you can review the logs with the following command: kubectl logs -f deployments/awx-operator-controller-manager -c awx-manager
  • You will need to watch the logs until you see: PLAY RECAP ******************************* localhost : ok=69 changed=0 unreachable=0 failed=0 …etc.
  • Make sure you see no failed or unreachable tasks; otherwise, blow away your AWX operator and redeploy/retry.
  • Provided everything went well, run: kubectl get pods -n awx
You should see something like:
NAME                                               READY   STATUS    RESTARTS      AGE
awx-postgres-13-0                                  1/1     Running   2 (43h ago)   4d4h
awx-67d97b57d9-hdtqb                               4/4     Running   8 (43h ago)   4d4h
awx-operator-controller-manager-78c7c99946-7dcm9   2/2     Running   8 (43h ago)   4d5h
  • Lastly, gather the temporary admin password created during the install (the secret is named after the AWX resource from awx-demo.yaml, here “awx”): kubectl get secret awx-admin-password -o jsonpath="{.data.password}" | base64 --decode
  • Now you can log on to your AWX build by opening your browser and pointing it to your server’s IP: http://servernameorip:30080
  • The username will be admin and the password will be the one you retrieved in the previous step.

If you are unfamiliar with AWX itself, I recommend watching the following video:

Installing AWX on AlmaLinux 9

I ran into some issues installing AWX on AlmaLinux 9 on Proxmox (I had the same issues with Alma 8.7). This also applies to RockyLinux 9.

I was installing AWX via Rancher’s k3s, following https://github.com/ansible/awx-operator#basic-install. I made it all the way to the section where you create awx-demo.yaml, add it to your kustomization.yaml, and build via kustomize build . | kubectl apply -f -. From there I was receiving errors such as "unable to determine if virtual resource","gvk":"apps/v1", and the build would ultimately fail out.

In order to make it past that error, I found a few posts which suggested changing the CPU type from “Default (kvm64)” to “host”. This sets the VM to match the CPU of the Proxmox host.

***If you are running Hyper-V, there is a similar option; see the final post in this Google Group conversation: https://groups.google.com/g/awx-project/c/4tmP0TlRODU.***

After resetting the CPU type, rebooting the VM, and re-running the kustomize build, I was able to make it quite a bit further. The logs looked like there were no issues; then, towards the end, the script once again failed. This time I was seeing the following error: "awx unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1". The pod itself was also down with a CrashLoopBackOff error. From there I found the following link, which got me past all of my installation issues: https://stackoverflow.com/questions/62442679/could-not-get-apiversions-from-kubernetes-unable-to-retrieve-the-complete-list

I ran kubectl api-resources, which listed the resources and confirmed metrics.k8s.io/v1beta1 was in fact down.

Next I ran: kubectl delete apiservice/v1beta1.metrics.k8s.io
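
If you want to confirm the broken API service is gone (and, later, that it comes back healthy once metrics-server restarts), you can list the registered API services:

kubectl get apiservices | grep metrics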

From there I re-ran the kustomize build command and the AWX installation completed successfully. I did have to open the firewall ports in Alma to allow my browser to access AWX.
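
For reference, opening the port with firewalld looks something like this (30080 assumes the NodePort from the AWX spec; adjust to yours):

firewall-cmd --permanent --add-port=30080/tcp
firewall-cmd --reload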

Steps to Install AWX:

#Install Kubernetes k3s (Rancher's lightweight Kubernetes)
curl -sfL https://get.k3s.io | sh -

#Install Kustomize
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh"  | bash

#Move Kustomize binary
mv kustomize /usr/local/bin/

#Go to the AWX Operator README and follow along from there:
# https://github.com/ansible/awx-operator#basic-install

Feel free to contact me if you have any comments or questions.

Disabling Inactive Domain User and Computer Accounts in Active Directory with Ansible

In my last article I wrote about having Ansible run several audit requests, including “We need a list of all inactive user accounts” and “We need a list of inactive computer accounts”. Now that we have those listed, we can let Ansible clean them up. I preferred to create a new playbook for these tasks. First it lists the users and computers it will be handling, then it disables each account and moves it to the Disabled_Accounts or Disabled_Computers OU. I never delete the accounts, as we prefer to disable them and then move them.

Below is my ansible playbook “fix_AD_Inactive-Users-AND-Computers-90days.yml”

---
- hosts: pdc
  gather_facts: no
  tasks:
     - name: copy fix_inactive_usr.ps1 to windows
       win_copy:
          src: files/fix_inactive_usr.ps1
          dest: c:\it\fix_inactive_usr.ps1

     - name: copy fix_inactive_pc.ps1 to windows
       win_copy:
          src: files/fix_inactive_pc.ps1
          dest: c:\it\fix_inactive_pc.ps1

     - name: Fix inactive users - 90 days
       win_shell: c:\it\fix_inactive_usr.ps1
       register: inactive_usr

     - debug: var=inactive_usr.stdout_lines

     - name: Fix inactive computers - 90 days
       win_shell: c:\it\fix_inactive_pc.ps1
       register: inactive_computer

     - debug: var=inactive_computer.stdout_lines

Below is the code for “fix_inactive_usr.ps1”

# Cutoff date: anything without a logon in the last 90 days
$date = (Get-Date).AddDays(-90)

# Enabled users whose last logon is older than the cutoff
$USR = (Get-ADUser -Filter {LastLogonDate -lt $date} -Property Enabled | Where-Object {$_.Enabled -like "true"} | Select DistinguishedName).DistinguishedName
echo $USR
ForEach ($Item in $USR){
   Disable-ADAccount $Item
   Move-ADObject -Identity $Item -TargetPath "OU=Disabled_Accounts,DC=contoso,DC=com"
   }

Please note that in the PowerShell scripts above and below, you will need to change “DC=contoso,DC=com” to reflect your actual domain.

Below is the code for “fix_inactive_pc.ps1”

# Specify inactivity range value below
$DaysInactive = 90
# $time variable converts $DaysInactive to LastLogonTimeStamp property format for the -Filter switch to work

$time = (Get-Date).Adddays(-($DaysInactive))

# Identify inactive computer accounts

$PC = (Get-ADComputer -Filter {LastLogonTimeStamp -lt $time} -Property Enabled | Where-Object {$_.Enabled -like "true"} | Select DistinguishedName).DistinguishedName
echo $PC
ForEach ($Item in $PC){
   Disable-ADAccount $Item
   Move-ADObject -Identity $Item -TargetPath "OU=Disabled_Computers,DC=contoso,DC=com"
   }
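
Since these scripts disable and move accounts in bulk, it can be worth doing a dry run first. Both cmdlets support PowerShell’s built-in -WhatIf switch, which prints what would happen without changing anything; for example, a read-only variant of the loop above:

ForEach ($Item in $PC){
   # -WhatIf reports each action without performing it
   Disable-ADAccount $Item -WhatIf
   Move-ADObject -Identity $Item -TargetPath "OU=Disabled_Computers,DC=contoso,DC=com" -WhatIf
   }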

Audit Active Directory with Ansible

Everyone loves an audit, right? We deal with audits quite a bit, and that brings remedial tasks like “We need a list of AD user accounts that have been locked out”, “We need a list of all inactive user accounts”, “We need a list of inactive computer accounts”, “We need a list of all members of the Domain Admins group”, and “We need a list of all AD accounts”. All of these requirements can easily be scripted with PowerShell. Since I love to automate things and would rather not run these commands separately, I figured I would create an Ansible playbook to run all the requests at the same time. That way I log on once, select my Ansible playbook, and let it run; I don’t even need to log on to the DC to run these tasks. I can sit back and let Ansible deal with it.

This simple Ansible playbook uses 3 PowerShell commands and 2 PowerShell scripts that I’m sure most Windows Administrators are familiar with.

---
- hosts: pdc
  gather_facts: no
  tasks:
     - name: copy audit_AD_inactive_users.ps1 to Windows
       win_copy:
          src: files/audit_AD_inactive_users.ps1
          dest: c:\cit\audit_AD_inactive_users.ps1

     - name: copy audit_AD_inactive_computers.ps1 to Windows
       win_copy:
          src: files/audit_AD_inactive_computers.ps1
          dest: c:\cit\audit_AD_inactive_computers.ps1

     - name: Run Audit for Locked-Out Accounts
       win_shell: Search-AdAccount -LockedOut | select Name, LockedOut,LastLogonDate,distinguishedName
       register: lockedoutaccounts

     - debug: var=lockedoutaccounts.stdout_lines

     - name: Run Audit of inactive users - 90 days
       win_shell: c:\cit\audit_AD_inactive_users.ps1
       register: inactive_users

     - debug: var=inactive_users.stdout_lines

     - name: Run Audit of inactive computers - 90 days
       win_shell: c:\cit\audit_AD_inactive_computers.ps1
       register: inactive_computers

     - debug: var=inactive_computers.stdout_lines

     - name: Run Audit for members of Domain Admins group
       win_shell: Get-ADGroupMember -Identity 'Domain Admins' | Select-Object name, objectClass,distinguishedName
       register: dom_admin_users

     - debug: var=dom_admin_users.stdout_lines

     - name: Run Audit for all domain users
       win_shell: Get-ADUser -Filter * -SearchBase "dc=contoso,dc=com" | select Name, objectClass,distinguishedName
       register: all_dom_users

     - debug: var=all_dom_users.stdout_lines

Not bad, right? Ansible rocks! The only complaint I can see is that I’m not outputting the results to a CSV file, but if you run this playbook often, you shouldn’t need the fancy format.
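
If you ever do need CSV, piping the same queries through Export-Csv inside the scripts is a small change. For example, a variant of the inactive-users script shown below (the output path is just an example):

$date = (Get-Date).AddDays(-90)
Get-ADUser -Filter {LastLogonDate -lt $date} -Property Enabled |
    Where-Object {$_.Enabled -like "true"} |
    Select-Object Name, SamAccountName, DistinguishedName |
    Export-Csv -Path c:\cit\inactive_users.csv -NoTypeInformation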

Below is the first PowerShell script “audit_AD_inactive_users.ps1”

$date = (get-date).AddDays(-90)

Get-ADUser -Filter {LastLogonDate -lt $date} -Property Enabled | Where-Object {$_.Enabled -like "true"} | Select Name, SamAccountName, DistinguishedName

Below is the second PowerShell script “audit_AD_inactive_computers.ps1”

# Specify inactivity range value below
$DaysInactive = 90
# $time variable converts $DaysInactive to LastLogonTimeStamp property format for the -Filter switch to work

$time = (Get-Date).Adddays(-($DaysInactive))

# Identify inactive computer accounts

Get-ADComputer -Filter {LastLogonTimeStamp -lt $time} -ResultPageSize 2000 -resultSetSize $null -Properties Name, OperatingSystem, SamAccountName, DistinguishedName, LastLogonDate | Select DNSHostName, LastLogonDate, DistinguishedName

HAProxy to the rescue

I have a client that did not want some of their employees having internet access, due to loss of productivity. The employee workstations were on their own network, firewalled off from the regular network. The firewall allowed very limited access to the internal office network and no access to the internet.

They ran into an issue where some of the employees required access to a certain website to do their jobs. I could easily open a hole in the firewall to that site, but the site was hosted on AWS and its IPs changed daily. I could keep adding new IPs, or I could go the proxy route. In the past I set up and configured a Squid proxy server to handle this, but I really wanted to see if I could get HAProxy to do it. I knew HAProxy could forward web traffic, but that was to a specific site with static IPs. I tested HAProxy in http mode as well as tcp mode pointing at known IPs, and it would work until the IP changed.

After some searching, I found HAProxy can use DNS service discovery to detect server changes on the fly and apply them automatically. All I needed to do was add a resolvers section to my HAProxy config along with load balancing. My configuration is below, with an explanation following. In the config, I’ve changed the name of the website the client is using to a more generic one, “fedex.com”.

global
   stats socket :9000 mode 660 level admin

resolvers dns1
   nameserver dns1 192.168.3.53:53
   accepted_payload_size 8192 # allow larger DNS payloads

frontend https
   bind *:443
   option tcplog
   mode tcp
   default_backend fedex-https


backend fedex-https
   mode tcp
   balance source
   server-template fedex1 3 www.fedex.com:443 resolvers dns1 init-addr none check inter 2000 rise 2 fall 5 verify none

The frontend listens on port 443 (clients are directed to it in their proxy configuration via an AD GPO). The backend server-template adds three entries from DNS lookups to the backend. To pick that number, first run a manual nslookup against the host you are looking to connect to and see how many results come back; in my case I got 6, so I used 3 (never go above the number of servers your manual nslookup resolves). I could just as easily have set it to 2, and the backend would use the first two hosts it gets when it checks DNS. In my actual configuration (not shown), I set the number to 2. The “init-addr none” lets HAProxy start even if it is unable to resolve the hostname at startup.
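
Because the global section exposes an admin-level stats socket on port 9000, you can also watch the servers HAProxy has discovered through its Runtime API; for example, with socat installed:

echo "show servers state fedex-https" | socat stdio tcp4-connect:127.0.0.1:9000

Each discovered backend server shows up with the address DNS handed out, which makes it easy to verify the template is tracking the IP changes.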

Now I have a hole in my firewall allowing access from the firewalled employees to my HAProxy server only, via port 443. An AD GPO sets their computers to use my HAProxy server for internet access. They can try to go to any other site and they get nothing; it only allows them through to fedex.com.

A more detailed explanation can be found here:

https://www.haproxy.com/blog/dns-service-discovery-haproxy/

and:

https://www.haproxy.com/blog/client-ip-persistence-or-source-ip-hash-load-balancing/

Ansible – List all powered-on VMs to CSV

Sometimes we need to audit our VMware environment, and it is nice to have Ansible gather this information in seconds in a format that is easy to import. This can take an hours-long job down to seconds. I initially had an Ansible playbook that would list all VMs, including templates and powered-down or paused VMs. That was nice, but I only really needed the powered-on VMs.

This grew from running the playbook and manually scraping the output for what I needed, which still took some time. The second iteration had me run the playbook and “tee” the output to a file, then run a series of 12 sed statements against the file to gather the information I needed. That was great and it took less time, but I wanted to get it down to a one-liner.

My third iteration is where I am today. It is a one-liner that isn’t pretty. I was able to join several sed statements into one, but the last four sed statements still had to run separately; when they were joined to the first, the output wasn’t what I expected.

This is what I am using now (I will break it down after):

ansible-playbook VMWARE_list_all_powered-on_vms.yml --ask-vault-pass | sed -e '/"msg"/,$!d; /"msg"/d; / ____________/,$d; s/        {//g; s/        },//g; s/            "guest_name": "//g; s/            "ip_address": "//g; s/"//g'  > list.csv && sed -i '1d' list.csv && sed -i '/        }/,$d' list.csv && sed -i 'N;s/\n//' list.csv && sed -i '/^[[:space:]]*$/d' list.csv && cat list.csv

I know the above code is not too appealing to the eye (feel free to message me if you have any suggestions). The first statement runs the Ansible playbook “VMWARE_list_all_powered-on_vms.yml”. Since I don’t want to store passwords in plain text, in this example I’m using Ansible Vault (there are better options out there). I pipe the output of Ansible into five sed statements. The first sed statement takes the Ansible output (which contains Ansible cowsay… which makes Ansible output fun) and does the following:

  1. Strip the first several lines of Ansible output down to the "msg": [ line
  2. Remove the msg line
  3. Remove the trailing Ansible output (PLAY RECAP)
  4. Remove all lines starting with "{"
  5. Remove all lines starting with "},"
  6. Remove everything before and including "guest_name": "
  7. Remove everything before and including "ip_address": "
  8. Remove all quotes and output to list.csv (I'm not finished yet)

Then come the separate sed statements, because when I included them in the single statement the format wasn’t what I expected:

  1. Remove the first line of output
  2. Remove everything after and including “}”
  3. Join the VM name and IP address lines together
  4. Remove all lines with blank spaces

Below is an example of my output (some VMs did not include their IP; that happens when VMware Tools is not installed or running on the VM. The actual Ansible playbook follows the output):

COUNT DOOKU, 192.168.77.4
COUNT CHOCULA, 192.168.192.
COUNT VON COUNT - 123 AHHH HA HA, 192.168.3.192
COUNT DRACULA, 192.168.1.81
GREEDO, 192.168.3.3
POE, 192.168.3.8
TARKIN, 192.168.4.4
GENERAL GRIEVOUS, 192.168.3.7
GROGU - ITS BABY YODA FOOL, 192.168.3.159
DEATH STAR, 192.168.144.14
DEATH STAR 2, 192.168.192.15
BB-8, 192.168.1.199
JABBA, 192.168.192.86
FINN, 192.168.144.3.
MANDO,
TRAWN, 192.168.3.8
KIRK, 192.168.3.3
SPOCK, 192.168.3.176
DATA, 192.168.1.86
WORF, 192.168.5.7
PICARD, 192.168.1.178
RIKER, 192.168.19.84
McCOY, 192.168.9.81
La FORGE,
SCOTTY, 192.168.3.55
ARCHER, 192.168.19.99
RON BURGANDY, 192.168.144.3
SULU,
PIKE, 192.168.1.78
T'POL, 192.168.192.24
T-PAIN -LOL, 192.168.1.192
ENTERPRISE, 192.168.1.194
INTREPID, 192.168.3.49
USS VIRGINIA CGN-38, 192.168.8.38
USS LaSALLE AGF-3, 192.168.8.3
MISS PIGGY,
KERMIT THE FROG, 192.168.1.174
GONZO, 192.168.192.5
FOZZIE, 192.168.7.21
ANIMAL, 192.168.3.92
BEAKER, 192.168.192.26
ROWLF, 192.168.3.80
SCOOTER, 192.168.3.36
SAM EAGLE, 192.168.192.1
DR BUNSEN HONEYDEW,
STALER, 192.168.1.155
WALDORF, 192.168.3.58
SWEDISH CHEF - BORK BORK BORK, 192.168.1.3
PIGS IN SPACE, 192.168.1.48
RIZZO THE RAT, 192.168.144.16
FRANK RIZZO - LOL,
OSCAR, 192.168.4.9
BIG BIRD, 192.168.3.30
BERT, 192.168.5.45
ERNIE, 192.168.1.118
GROVER, 192.168.3.55
SNAKE EYES,
COBRA COMANDER, 192.168.3.3.
STORM SHADOW, 192.168.3.36
LADY JAYE, 192.168.3.5
BARONESS, 192.168.3.8
DUKE, 192.168.1.177
DESTRO, 192.168.3.1
SCARLETT, 192.25.160.1
FLINT, 192.168.3.17
HAWK, 192.168.192.1
BIG CHUCK,
LITTLE JOHN, 192.168.144.13
COOL GHOUL, 192.168.3.6
ZARTAN, 192.168.192.7
MEGATRON, 192.168.14.3
STARSCREAM, 192.168.1.185
ICE CREAM, 192.168.33.3
ME GRIMLOCK, 192.168.5.101
JAZ, 192.168.3.54
OPTIMUS PRIME, 192.168.3.53
IRONHIDE, 192.168.1.116
SOUNDWAVE - THE BEST, 192.168.1.136
KUP, 192.168.192.8
SLUDGE, 192.168.1.184
LASERBEAK, 192.168.192.14
BUMBLEBEE, 192.168.144.3
GRAPPLE, 192.168.3.1
SMOKESCREEN, 192.168.3.45
RUMBLE, 192.168.14.7
RAVAGE, 192.168.5.99
MAGNUM PI, 192.168.3.30
A-TEAM, 192.168.144.17
MR T - I PITTY THE FOOL, 192.168.6.3.
TRAP - ITS A TRAP, 192.168.1.135
MACGYVER,
THE DUKES OF HAZZARD, 192.168.1.136
BOSS HOG,
TOUR OF DUTY, 192.168.3.56
VOLTRON, 192.168.1.19
TIMMY, 192.168.1.15
JIMMY, 192.168.3.4
MR-HANKEY, 192.168.3.5
CARTMAN, 192.168.5.44
KENNY, 192.168.3.3
STAN, 192.168.19.168
KYLE, 192.168.5.60
TOLKIEN,
CHEF, 192.168.3.99
LIAN-CARTMAN, 192.168.1.3
THE-SCARY-MONSTER, 192.168.1.100
BEBE, 192.168.1.156
SHARON-MARSH, 192.168.1.101
TOWELIE,
LINDA-STOTCH, 192.168.192.3
GARY, 192.168.1.3
MR GARRISON, 192.168.12.3
BONO, 192.168.1.16
WENDY TESTABURGER, 192.168.192.8
ANAKIN, 192.168.3.37
DARTH VADER, 192.168.1.115
LUKE, 192.168.3.38
OBI-WAN, 192.168.1.49
HAN SOLO, 192.168.35.22
SHEEV, 192.168.3.33
LEA, 192.168.1.117
YODA, 192.168.192.168
CHEWBACA, 192.168.5.41
BOBA FETT, 192.168.192.11
JENGO-FETT, 192.168.3.58
R2-D2, 192.168.144.11
C-3PO, 192.168.22.45
STORM TROOPER, 192.168.4.3
SNOW TROOPER, 192.168.69.101
CLONE TROOPER,
SUPER TROOPERS - LOL, 192.168.99.100
REY, 192.168.192.4
LANDO,
PADME, 192.168.88.88
KYLO REN, 192.168.1.194
MACE WINDU - LIVES, 192.168.192.5
QUI-GON JIN,
GIN AND JUICE, 192.168.1.189
ADMIRAL ACKBAR,
DARTH MAUL, 192.168.3.41
AHSOKA TANO, 192.168.77.78

Here is the actual Ansible Playbook “VMWARE_list_all_powered-on_vms.yml”


---
- hosts: localhost
  vars:
    vcenter_hostname: vcenter.domain.local
    vcenter_user: ansibleuser@DOMAIN.LOCAL
    vcenter_pass: !vault |
          $ANSIBLE_VAULT;1.1;AES256
    
    esxhost: 192.168.1.101
    name: "{{ vm_name }}"
    notes: Ansible Test
    dumpfacts: False

  tasks:
  - name: Gather all VMs information
    vmware_vm_info:
      hostname: '{{ vcenter_hostname }}'
      username: '{{ vcenter_user }}'
      password: '{{ vcenter_pass }}'
      validate_certs: no
    register: all_vm_info
    delegate_to: localhost


  - name: Gather a list of all powered on VMs
    set_fact:
      on_vm: "{{ all_vm_info.virtual_machines | json_query(query) }}"
    vars:
      query: "[?power_state=='poweredOn']"
    register: jsoncontent

  - name: Gather a list of all powered on VM names
    debug: msg="{{ on_vm | json_query(jmesquery) }}"
    vars:
      jmesquery: "[*].{guest_name: guest_name, ip_address: ip_address}"
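
As a possible alternative to the sed pipeline, the playbook itself could write the CSV directly with a copy task and a small Jinja loop. A rough sketch, appended to the same play (untested against my environment; the dest path is an example):

  - name: Write powered-on VMs straight to CSV
    copy:
      content: |
        {% for vm in on_vm %}{{ vm.guest_name }}, {{ vm.ip_address }}
        {% endfor %}
      dest: ./list.csv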


TACACSGUI and Aruba AirWave TACACS

TACACSGUI (https://tacacsgui.com) is a free, open-source TACACS server with a robust interface. When setting up TACACS on Aruba AirWave with TACACSGUI (or even the Cisco equivalent), normal TACACS users are unable to log on. I did not have any issues like this when I set up TACACS for Aruba Mobility Masters and Mobility Controllers; I only ran into it with Aruba AirWave. AirWave’s interface looks a lot older in design compared to Mobility Master and Mobility Controllers. You need to create an admin role (or service in TACACSGUI) for the user to authenticate with, as AirWave expects a specific role.

As of 1/10/22, TACACSGUI does not have a predefined service or role for Aruba AirWave, so I needed to create one manually.

In TACACSGUI, go to Access Control, then Services. Under Services, click the Add button to define a new service.

For the service name, you can call it whatever you like; I called mine Aruba-Airwave-access. I then selected “Only manual configuration”. In the manual configuration, enter the following:

service = AMP { set role = Admin }

Once that is entered, you can save the service (role). Next you need to add the new service (role) to a user in TACACSGUI.

Below is where you add the newly created service (role) to the user. A user can have more than one service (role) in TACACSGUI; in the picture below, this user has only one service associated with them. For example, my user has services (roles) for AirWave, Juniper, and Cisco shell access with a specific privilege level specified.

Here is a picture showing the TACACS configuration settings in Aruba Airwave:

Using Ansible to track down the exact switch port a MAC address is connected to

I have a group of switches, and users keep asking me to locate MAC addresses on them in order to trace down the exact port a device is connected to and move that port to a new VLAN. Rather than logging into each switch and tracking down where the MAC resides, I created a basic Ansible playbook to help me with this. It has been a huge time saver. Hopefully it can help someone else too (it helps if your switch ports have descriptions):

---
- name: Find mac address in sec-switches
  hosts: sec-switch
  gather_facts: false
  connection: local

  vars_prompt:
     - name: mac
       prompt: What is the mac address?
       private: no

  tasks:
    - name: debugging
      ansible.builtin.debug:
        msg: 'Searching for {{ mac }}'

    - name: search
      ios_command:
        commands:
          - "show mac address-table | include {{ mac }}"
      register: printout

    # Strip everything up to the 13th space-separated field, keeping just
    # the captured interface name via the '\\1' replacement
    - set_fact:
        intf: |
          {{ printout.stdout_lines[0] |
             map('regex_replace','^(?:[^ ]*\ ){12}([^ ]*)','\\1') |
             list }}

    - name: show int desc
      ios_command:
        commands:
          - "sh interfaces description | inc {{ intf[0].strip() }}"
      register: printout2

    - name: View output
      debug:
        var: printout2
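
One usage note: IOS prints MACs in dotted-triple format, so enter the address at the prompt the same way show mac address-table displays it. A hypothetical run (the playbook filename is whatever you saved it as):

ansible-playbook find_mac.yml
What is the mac address?: 0011.2233.4455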

<Snippet of output>

ok: [switch9] => {
    "printout2": {
        "changed": false,
        "failed": false,
        "stdout": [
            "Gi1/0/42                       up             up       SEG 12 "
        ],
        "stdout_lines": [
            [
                "Gi1/0/42                       up             up       SEG 12"
            ]
        ]
    }
}

ok: [switch20] => {
    "printout2": {
        "changed": false,
        "failed": false,
        "stdout": [
            "Gi1/0/25                       up             up       UPLINK"
        ],
        "stdout_lines": [
            [
                "Gi1/0/25                       up             up       UPLINK"

Special shout-out to rajthecomputerguy, who helped me by suggesting the strip() method to get rid of whitespace:

debug: var=intf[0].strip()