hey, pretty common gotcha with the Airflow Helm chart. Looking at your screenshots, a couple of things stand out: from the pod output, every single pod is still running the old image.

1. Use `defaultAirflowRepository` and `defaultAirflowTag`. These are the top-level values that propagate to ALL components, including KubernetesExecutor worker pods, the scheduler, webserver, and triggerer:

   ```shell
   helm upgrade --install airflow apache-airflow/airflow \
     --set defaultAirflowRepository=your-registry/your-airflow \
     --set defaultAirflowTag=your-tag
   ```

2. Verify the variable substitution in your CI/CD. From the screenshot it looks like you're substituting the tag from a CI variable; if that variable is unset at deploy time, `--set defaultAirflowTag=` gets passed empty and the chart silently falls back to its default image.

3. Check `images.airflow.pullPolicy`. The chart default is `IfNotPresent`, so if you reuse the same tag, nodes keep serving the cached image.

4. Verify after deploy:

   ```shell
   # check what values helm actually stored
   helm get values <release-name> -n <namespace>

   # check what image the pods are running
   kubectl get pods -n <namespace> -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'

   # check the KubernetesExecutor pod template
   kubectl get configmap -n <namespace> <release>-airflow-pod-template-file -o yaml
   ```

The ConfigMap check is especially important for KubernetesExecutor - that's where the worker pod specs come from. My bet is #2, the CI variable substitution. Hope that helps, lmk
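On the #2 suspicion, here's a minimal self-contained sketch of that failure mode (the `IMAGE_TAG` variable name is hypothetical, just for illustration): an unset CI variable expands to an empty string, so helm quietly deploys the chart's default tag instead of erroring out.

```shell
# Hypothetical CI variable; unset here to simulate the failure.
unset IMAGE_TAG

# An unset variable expands to an empty string, so helm would
# receive "--set defaultAirflowTag=" and use the chart default.
echo "--set defaultAirflowTag=${IMAGE_TAG}"

# Safer pattern: ${VAR:?msg} makes the expansion fail loudly
# when the variable is missing, instead of deploying the wrong image.
(echo "--set defaultAirflowTag=${IMAGE_TAG:?IMAGE_TAG not set}") \
  || echo "caught missing IMAGE_TAG"
```

Putting a `${VAR:?}` guard in the deploy script turns the silent fallback into a hard CI failure, which is usually what you want.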
Turns out the issue was that I'm using Airflow as a subchart, so everything needs to be nested one level deeper, under the subchart's key.
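For anyone hitting the same thing, a rough sketch of what that extra nesting looks like, assuming the subchart is declared under the name `airflow` in the parent chart (your subchart name may differ):

```yaml
# parent chart's values.yaml - every value from the Airflow chart
# moves one level down, under the subchart's name
airflow:
  defaultAirflowRepository: your-registry/your-airflow
  defaultAirflowTag: your-tag
```

The same applies to `--set` flags: they'd become `--set airflow.defaultAirflowTag=your-tag` when installing the parent chart.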