# general
l
Hi @calm-oxygen-85457, this looks like it might be a bug with that specific Azure resource. Do you have a snippet of the Terraform code you're using to create that resource? That would help us debug it.
c
Sure, here's the main.tf:
resource "azurerm_kubernetes_cluster" "k8s" {
_name_                 = data.azurerm_resource_group.aks_rg.name
_location_             = data.azurerm_resource_group.aks_rg.location
_resource_group_name_  = data.azurerm_resource_group.aks_rg.name
_dns_prefix_           = data.azurerm_resource_group.aks_rg.name
_kubernetes_version_   = data.azurerm_kubernetes_service_versions.current.latest_version
_node_resource_group_  = format("%s-%s", data.azurerm_resource_group.aks_rg.name, "node")
_azure_policy_enabled_ = var.enable_azure_policy
_disk_encryption_set_id_ = local.os_disk_encryption_set_id
network_profile {
_network_plugin_ = "azure"
_network_policy_ = "azure"
} linux_profile {
_admin_username_ = "ubuntu"
ssh_key {
_key_data_ = data.azurerm_key_vault_secret.client_instance_name_cluster_key_public.value
} } auto_scaler_profile {
_balance_similar_node_groups_      = false
_max_graceful_termination_sec_     = "600"
_scale_down_delay_after_add_       = var.aks_scale_down_delay_after_add
_scale_down_delay_after_delete_    = "10s"
_scale_down_delay_after_failure_   = "3m"
_scale_down_unneeded_              = "10m"
_scale_down_unready_               = "20m"
_scale_down_utilization_threshold_ = var.aks_scale_down_utilization_threshold
_scan_interval_                    = "10s"
_skip_nodes_with_local_storage_    = false
_skip_nodes_with_system_pods_      = false
} default_node_pool {
_name_                 = var.node_pool_name
_vm_size_              = var.vm_size
_orchestrator_version_ = data.azurerm_kubernetes_service_versions.current.latest_version
_os_disk_type_         = var.os_disk_type
_os_disk_size_gb_      = var.os_disk_type == "Ephemeral" && var.os_disk_size_gb == 30 ? null : var.os_disk_size_gb
# The type of Default Node Pool for the Kubernetes Cluster must be VirtualMachineScaleSets to attach multiple node pools.
_type_                = "VirtualMachineScaleSets"
_enable_auto_scaling_ = var.enable_auto_scaling
_availability_zones_  = var.availability_zones
# Allow smooth autoscaling enabling from node count setting and agent_count default value
_node_count_     = var.enable_auto_scaling ? null : var.agent_count
_min_count_      = var.min_count
_max_count_      = var.max_count
_vnet_subnet_id_ = data.azurerm_subnet.aks_vnet_subnet.id
_max_pods_       = var.max_pods
_node_labels_ = {
nodetype = "default" }
_tags_ = local.aggregated_tags
dynamic "linux_os_config" { # If override_vm_max_count is true, inject linux_os_config configuration otherwise nothing is injected
_for_each_ = var.override_vm_max_count ? [""] : []
content { sysctl_config {
_vm_max_map_count_ = var.vm_max_map_count
} } } } lifecycle {
_ignore_changes_ = [role_based_access_control, windows_profile, default_node_pool[0].node_labels, default_node_pool[0].max_pods, default_node_pool[0].tags, default_node_pool[0].orchestrator_version, kubernetes_version]
} identity {
_type_ = "SystemAssigned"
} role_based_access_control {
_enabled_ = true
azure_active_directory {
_managed_                = true
_admin_group_object_ids_ = var.admin_group_object_ids
} }
_tags_ = local.aggregated_tags
}
l
Thanks, I’ll see if I can replicate.
c
I have the same behavior with another, smaller module:

main.tf
locals {
  aggregated_tags = merge(var.tags, tomap({ "InstanceName" : format("k8saas-%s", var.client_instance_name) }))
}

resource "azurerm_log_analytics_workspace" "dedicated" {
  name                = format("k8saas-%s", var.client_instance_name)
  location            = data.azurerm_resource_group.aks_rg.location
  resource_group_name = data.azurerm_resource_group.aks_rg.name
  sku                 = var.sku
  retention_in_days   = var.retention_in_days
  tags                = local.aggregated_tags
  daily_quota_gb      = var.daily_quota_gb
}

variables.tf

variable "client_instance_name" {
  type        = string
  description = "Name of the cluster"
}

variable "retention_in_days" {
  type        = number
  default     = 30
  description = "The workspace data retention in days. Possible values are either 7 (Free Tier only) or range between 30 and 730."
}

variable "sku" {
  type        = string
  default     = "PerGB2018"
  description = "(Optional) Specifies the Sku of the Log Analytics Workspace. Possible values are Free, PerNode, Premium, Standard, Standalone, Unlimited, and PerGB2018 (new Sku as of 2018-04-03)."
}

variable "tags" {
  type        = map(any)
  default     = {}
  description = "Tags following the k8saas convention"
}

variable "daily_quota_gb" {
  type        = string
  default     = null
  description = "The workspace daily quota for ingestion in GB. Defaults to null (unlimited) if omitted."
}

data.tf

data "azurerm_resource_group" "aks_rg" {
  name = format("k8saas-%s", var.client_instance_name)
}
l
Can you run with
--log-level=debug
and see if that gives any clues?
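Something like this, assuming you're running infracost breakdown against that module directory (adjust the path if your setup differs):
cd path/to/the/module
infracost breakdown --path . --log-level=debug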
@calm-oxygen-85457 my guess is that, with the HCL parsing option, it's unable to find the region. Out of interest, if you change
location = data.azurerm_resource_group.aks_rg.location
to specify the actual region, like
location = "eastus"
does it work?
c
Hi @little-author-61621, yes, we use Terragrunt in addition to Terraform and some variables are in an HCL file. It works if I change the location to "eastus", thanks!
l
Great, yeah our HCL parsing method won't be able to pull the data to resolve
data.azurerm_resource_group.aks_rg.location
so I wonder how we can find this otherwise. Maybe we could allow a configuration option to set the default region. The other option is to generate a Terraform plan JSON and run that through Infracost, since that will pull the correct region.
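For reference, something along these lines should do it (the file names here are just examples):
terraform init
terraform plan -out=tfplan.binary
terraform show -json tfplan.binary > plan.json
infracost breakdown --path plan.json
If you're going through Terragrunt, the equivalent terragrunt plan / terragrunt show commands should work too, though keep in mind the plan file gets written relative to the directory Terragrunt runs Terraform in.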