Part 1: KE/TZ/UG — Build Image & Push to Registry

Overview

Build the image, configure the Docker host, and push to the target registry for the selected environment.

  1. Connect to CyberArk.
  2. Connect to Host cont16059.
  3. cd IMAGE
  4. rm -rf <SERVICE-NAME-SHARED-BY-DEV-TEAM>
  5. Clone the Repository as shared by Dev/Admin:
    git clone git@extio_git_srv1:<SERVICE-NAME>.git
    # OR: git clone git@extio_git_srv2:<SERVICE-NAME>.git
    cd <MODULE-NAME>   # Example: cd az-some-module
    Clone issues (SSH/Network)?
    ! If “Host key verification failed”, confirm SSH keys and host trust; if “Connection timed out”, check firewall rules or use the alternate host; use ssh -v git@extio_git_srv1 for verbose debugging.
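    A quick connectivity test before retrying the clone (a sketch, assuming standard OpenSSH tooling):
    ssh -T git@extio_git_srv1     # authentication-only test against the primary git host
    ssh -vT git@extio_git_srv1    # verbose output shows key negotiation and host verification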
  6. Switch to the latest branch:
    git branch --all
    git checkout <LATEST-BRANCH-NAME>
    ! If the branch is not found, confirm the exact name with Dev/Admin and run git fetch --all.
  7. Export the Docker Registry Host (see Table A):
    export DOCKER_HOST=tcp://<DOCKER-HOST-IP>:2375
    Environment variable not set?
    i Verify with echo $DOCKER_HOST; if empty, re-run export and add to shell profile for persistence (e.g., ~/.bashrc).
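    For example, to make the setting persistent (a sketch; substitute the IP from Table A):
    echo 'export DOCKER_HOST=tcp://<DOCKER-HOST-IP>:2375' >> ~/.bashrc
    source ~/.bashrc
    echo $DOCKER_HOST    # should print the registry host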
  8. Run Maven build & push:
    mvn clean package fabric8:build fabric8:push -DskipTests
    ! Build failures may indicate network/proxy issues, a Docker daemon problem (check with docker info), insufficient disk space (check with df -h), or incompatible Maven/Java versions.
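    A quick pre-build check covering the points above (illustrative; assumes the Docker CLI and Maven are on the PATH):
    docker info    # confirms the daemon behind $DOCKER_HOST is reachable
    df -h /tmp     # confirms free disk space for image layers
    mvn -v         # shows the Maven and Java versions in use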
  9. If it shows BUILD SUCCESS, proceed to Part 2; otherwise, escalate to Dev/Admin with logs.

Part 2: KE/TZ/UG — Deploy to K8S Cluster

Overview

Update image tags and environment configuration, then apply Kubernetes manifests to the correct cluster context.

  1. Connect to Host cont160150.
  2. Navigate to deployment specs:
    cd /home/bastion/DEPLOYMENT-SPECS/<ENVIRONMENT>/<COUNTRY> # Example: cd /home/bastion/DEPLOYMENT-SPECS/UAT/KE
    Cannot find directory?
    ! Confirm environment and country folders, check permissions (ls -ld /home/bastion/DEPLOYMENT-SPECS/*), and verify the correct host.
  3. Change to module directory:
    cd <MODULE-NAME> # Example: cd az-some-module
    ! If missing, ensure the module was cloned correctly in Part 1 and verify path spelling/casing.
  4. Edit the deployment YAML to update the image version:
    vi deploy.yaml # OR vi production-deploy.yaml
    Example line inside YAML:
    image: registry.apizone.io:3000/az-some-module:1.0.6-RELEASE
    Image version errors?
    ! Confirm the exact tag from build logs; mistyped tags cause pull failures; optionally verify with docker pull <image>:<tag>.
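    To double-check the tag before applying (illustrative, reusing the example image above):
    grep -n 'image:' deploy.yaml
    docker pull registry.apizone.io:3000/az-some-module:1.0.6-RELEASE    # optional pull test against the registry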
  5. Add or update required ENV variables:
    - name: SOME_VARIABLE
      value: "someValue"
    ! Missing or incorrect ENV variables commonly cause crashes or misconfiguration; confirm with Dev/Admin.
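    In context, the variable sits under the container's env: block of the deployment YAML (an illustrative snippet; names and values are placeholders):
    containers:
      - name: az-some-module
        image: registry.apizone.io:3000/az-some-module:1.0.6-RELEASE
        env:
          - name: SOME_VARIABLE
            value: "someValue"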
  6. Source environment (see Table B):
    source ke-msb-uat
    Source file fails?
    ! Verify the file exists and is readable (ls -l ke-msb-uat) and check the sourced file for syntax errors.
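    Quick checks when sourcing fails (illustrative; assumes the file is a shell script that selects the cluster context):
    ls -l ke-msb-uat     # confirm existence and read permission
    bash -n ke-msb-uat   # syntax-check the file without executing it
    source ke-msb-uat && kubectl config current-context    # confirm the expected cluster is now active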
  7. Deploy to K8S:
    kubectl apply -f <DEPLOYMENT-YAML-FILE> # Example: kubectl apply -f deploy.yaml
    ! If deployment fails, validate YAML syntax with kubectl apply --dry-run=client, check context (kubectl config current-context), and ensure RBAC permissions.
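    For example (a sketch using standard kubectl options):
    kubectl config current-context                    # confirm you are pointed at the intended cluster
    kubectl apply --dry-run=client -f deploy.yaml     # validate the manifest without changing the cluster
    kubectl apply -f deploy.yaml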
  8. Check Pod status and logs:
    pods | grep <MODULE-NAME>
    n-logs 100 <POD-NAME>
    desc <POD-NAME> | grep -i image
    Pods not running or crashing?
    ! Use kubectl describe pod <POD-NAME> and kubectl logs <POD-NAME>; common issues include image pulls, env vars, resource limits, and volumes.
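    Equivalent plain kubectl commands, in case the host aliases above are unavailable (illustrative; add -n <NAMESPACE> if your context requires it):
    kubectl get pods | grep <MODULE-NAME>
    kubectl logs <POD-NAME> --tail=100
    kubectl describe pod <POD-NAME> | grep -i image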

SWIFT UAT Deployment

i This follows the standard two-part process but uses a specific Docker Host and deployment path.

Part 1: Build Image & Push to Registry

  1. Follow the same KE/TZ/UG build steps on host cont16059.
  2. Before running the Maven build, set the DOCKER_HOST for SWIFT UAT:
    export DOCKER_HOST=tcp://10.137.160.187:2375
  3. Proceed with the mvn clean package … build and push.

Part 2: Deploy to K8S Cluster

  1. Connect to Host cont160150.
  2. Navigate to SWIFT UAT deployment directory:
    cd /home/bastion/DEPLOYMENT-SPECS/UAT/SWIFT/<Module_Directory>
  3. Update YAML, source the environment, and apply with kubectl as in Part 2.

SWIFT PROD Deployment

! The production deployment for SWIFT follows a manual image transfer process across multiple hosts.

Part 1: Build, Save, and Transfer Image

Complete the build, then securely transfer the image tarball to the production registry host for load and push.

  1. On host cont16059: set Docker Host to the registry.
    export DOCKER_HOST=tcp://10.137.160.187:2375
  2. On host cont16059: save the Docker image to a tar file.
    docker save -o /tmp/image-name.tar image:tag
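    Before transferring, it can help to record the tarball size and checksum (illustrative):
    ls -lh /tmp/image-name.tar
    sha256sum /tmp/image-name.tar    # compare after each scp hop to detect a corrupted transfer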
  3. From local machine: copy image from cont16059 to local Downloads.
    scp cont16059:/tmp/image-name.tar Downloads/
  4. From local machine: copy to intermediate host cont17150.
    scp Downloads/image-name.tar cont17150:/tmp/
  5. On host cont17150: transfer to production registry 10.0.1.100.
    # Log in to cont17150
    su - extio_user    # Pass: T@er!shn2fed9
    sudo su -          # Pass: T@er!shn2fed9
    su - apizone
    scp /tmp/image-name.tar 10.0.1.100:/tmp/
  6. On production registry host 10.0.1.100: load and push.
    ssh 10.0.1.100
    sudo su -
    cd /tmp
    docker load -i /tmp/image-name.tar
    docker push image:version
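    If the loaded image is not already tagged for the production registry, it may need a registry-prefixed tag before the push (a sketch; the registry address mirrors the pattern used earlier in this guide and should be confirmed with Dev/Admin):
    docker images | grep image-name                                     # confirm the loaded name and tag
    docker tag image:version registry.apizone.io:3000/image:version     # only if a registry prefix is required
    docker push registry.apizone.io:3000/image:version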

Part 2: Deploy to K8S Cluster

  1. Connect to Host cont160150.
  2. Navigate to SWIFT PROD deployment directory:
    cd /home/bastion/DEPLOYMENT-SPECS/PROD/SWIFT
  3. Update production-deploy.yaml, source correct env (e.g., source swift-msb-prod), and apply.

Troubleshooting & Common Issues

Common issues
  • Image Not Pushed: Verify Docker login, correct DOCKER_HOST, and registry network access; check docker login and docker info.
  • Git Clone Fails: Try alternate host extio_git_srv2, validate SSH keys, and review firewall rules.
  • Access Denied: Ensure CyberArk session is active and SSH keys/tokens are valid; re-authenticate if expired.
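  A quick check of registry connectivity and credentials (illustrative; the registry address follows the example used in the deployment YAML):
    docker info                              # confirms the daemon behind $DOCKER_HOST is reachable
    docker login registry.apizone.io:3000    # credentials as provided by Dev/Admin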
Deployment and cluster issues
  • K8S Deployment Fails: Validate YAML with kubectl apply --dry-run=client -f <file> and confirm context/permissions with kubectl config current-context.
  • YAML File Errors: Use yamllint or a YAML validator before applying to the cluster.
  • Wrong Image Version: Ensure the tag in the deployment YAML exactly matches the build logs to prevent pull errors.
Pod runtime issues
  • Pods CrashLoopBackOff: kubectl describe pod for events and kubectl logs for app output; check env vars, image availability, resource limits, and volumes.
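  A minimal triage sequence (illustrative):
    kubectl describe pod <POD-NAME>       # the Events section lists pull errors, OOM kills, and scheduling failures
    kubectl logs <POD-NAME> --previous    # output from the last crashed container instance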

i For unresolved issues, collect exact error messages, relevant YAML, and recent logs before escalation to Dev/Admin.

Environment & Cluster Mappings

Table A: Docker Registry Host Mapping

Environment           Country        DOCKER_HOST
UAT Registry          MSB Kenya      export DOCKER_HOST=tcp://10.137.160.146:2375
UAT Registry          MSB Tanzania   export DOCKER_HOST=tcp://10.137.160.146:2375
UAT Registry          MSB Uganda     export DOCKER_HOST=tcp://10.137.160.146:2375
Production Registry   MSB Kenya      export DOCKER_HOST=tcp://10.137.171.3:2375
Production Registry   MSB Tanzania   export DOCKER_HOST=tcp://10.137.129.55:2375
Production Registry   MSB Uganda     export DOCKER_HOST=tcp://10.137.129.55:2375

Table B: Cluster Environment Mapping

Environment   Country        Cluster Identifier
UAT           MSB Kenya      ke-msb-uat
UAT           MSB Tanzania   tz-msb-uat
UAT           MSB Uganda     ug-msb-uat
Production    MSB Kenya      ke-msb-prod
Production    MSB Tanzania   tz-msb-prod
Production    MSB Uganda     ug-msb-prod