Thursday, March 30, 2023

Azure Bicep complete deployment Project 1

 

Introduction

 

This Bicep deployment includes a separate module for defining naming conventions. It uses parameter files for different environments, allowing the subscription to be switched during deployment for production, QA, and staging. The deployment selects the application to be deployed, generates the virtual machine based on the application name, and creates a corresponding resource group. Idempotency means common resources such as shared services are skipped if they already exist, and an inline PowerShell deployment example shows how to check whether a resource already exists. The deployment also uses nested resource group modules, nested outputs, and environment-driven conditions. The Bicep configuration file adds linting and reporting restrictions during the code build.

Contents

Introduction

Parameters

Main Bicep File

Naming Convention Module

Core Resource Group Module

Storage Module

Network Module

VM Compute Module

PowerShell Inline Deployment Scripts

Shared Services

Log Analytics

 

 




Parameters


In this section of the blog post, I discuss the Bicep code that I authored to deploy Azure resources using a standard naming convention. I incorporated various functions and tools that I learned from Reactor sessions, Learn Live, and GitHub. Although the Bicep manifest may appear complex relative to the size of the environment, the goal was to exercise different functions and tools rather than repeat the same function. This results in a larger unified module and parameter file, but it will reduce code complexity in future modules once I use a registry to store the composite module and abstract it from the parameter and deployment modules.

 

I pass parameters to my Bicep module to build naming conventions based on application type, resource type, and an index value. This lets me flexibly choose the VNet, the subnets underneath it, the number of VMs, and the storage. The app role and application name help build the naming convention to deploy the VM based on the app function. In future modules, I plan to have a separate deploy module per app function. Currently, this is how the production parameter file appears:


{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "department": {
      "value": "IT"
    },
    "appRoleIndex": {
      "value": 3
    },
    "saAccountCounts": {
      "value": [ 1 ]
    },
    "appRole": {
      "value": [
        {
          "Name": "Application Server",
          "Shortname": "ap"
        },
        {
          "Name": "Active Directory",
          "Shortname": "dc"
        },
        {
          "Name": "Tool server",
          "Shortname": "tool"
        },
        {
          "Name": "dhcp server",
          "Shortname": "dhcp"
        }
      ]
    },
    "env": {
      "value": "prod"
    },
    "virtualNetworks": {
      "value": [
        {
          "name": "hubVnet",
          "addressPrefixes": [ "10.10.0.0/18" ],
          "subnets": [
            { "name": "CoreSubnet", "addressPrefix": "10.10.1.0/24" },
            { "name": "ToolSubnet", "addressPrefix": "10.10.2.0/24" },
            { "name": "DirectoryServiceSubnet", "addressPrefix": "10.10.3.0/24" }
          ]
        },
        {
          "name": "AppVnet",
          "addressPrefixes": [ "10.13.0.0/18" ],
          "subnets": [
            { "name": "AppSubnet", "addressPrefix": "10.13.1.0/24" },
            { "name": "ApptoolSubnet", "addressPrefix": "10.13.2.0/24" },
            { "name": "AppDirServiceSubnet", "addressPrefix": "10.13.3.0/24" }
          ]
        },
        {
          "name": "dbVnet",
          "addressPrefixes": [ "10.15.0.0/18" ],
          "subnets": [
            { "name": "dbSubnet", "addressPrefix": "10.15.1.0/24" },
            { "name": "dbToolSubnet", "addressPrefix": "10.15.2.0/24" }
          ]
        },
        {
          "name": "wvdVnet",
          "addressPrefixes": [ "10.17.0.0/18" ],
          "subnets": [
            { "name": "wvdSubnet", "addressPrefix": "10.17.1.0/24" },
            { "name": "wvdToolSubnet", "addressPrefix": "10.17.2.0/24" }
          ]
        }
      ]
    },
    "locationList": {
      "value": {
        "westus2": "azw2",
        "eastus": "aze"
      }
    },
    "dnsServers": {
      "value": [
        "1.1.1.1",
        "4.4.4.4"
      ]
    }
  }
}

To create the parameter file, I had to be careful selecting the right objects and constructs, and I repeated the deployment multiple times to evaluate how resources are generated from an object versus an array construct. Choosing the correct braces and brackets, { } for objects and [ ] for arrays, while nesting arrays within objects was crucial, especially for the VNet and subnet constructs, which can become complex during deployment and produce errors in the output. Adding a dedicated network deployment module later in the process may make more sense.

The parameter file starts with the schema definition, which is required for a JSON parameter file. I created a similar construct for the DEV and staging environments and refer to the respective file during deployment.

Let's break down the contents of the parameter file:

 

1. appRoleIndex is an integer parameter that serves as a pointer into the appRole array. As the deployment runs, it selects the application name and short name from the objects defined in the application role array.

2. saAccountCounts defines the number of storage accounts to deploy. The value can be overridden by the value specified in a PowerShell splat expression, the Azure CLI, or a YAML construct during the deployment process. The same applies to appRoleIndex.

3. Based on the application role, the assigned application short name is passed from the main template through the naming module to the resource modules.

4. I have four VNets: hub, app, db, and WVD. I configured three subnets for hub, three for app, two for db, and two for WVD. The planning is not based on a production deployment, so the selection of CIDR blocks and subnets under a VNet is arbitrary. In testing and training, I kept it arbitrary to verify that the loop constructs yield the exact count of subnets as the network environment is built; this is more about validating tools and functions than about network design. For future modules, the network module would be a one-time construct and should live as a composite module in a container registry with an RBAC policy, allowing access only to the author but not the deployer.

5. I have only added two locations for testing, but this could extend to more locations based on the size of the deployment and DR planning. Based on the location, it also picks up the corresponding key-value pair for the location short name, which again matters for the naming module and resource names.
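As a rough Bicep sketch of the index lookup described in points 1-3 (parameter names taken from the files above; an illustration, not the literal deployment code):

```bicep
// Sketch: appRoleIndex picks one object out of the appRole array (zero-based).
// With the production parameter file above (appRoleIndex = 3),
// this resolves to the 'dhcp server' / 'dhcp' pair.
param appRole array
param appRoleIndex int

var appRoleName = appRole[appRoleIndex].Name
var appShortName = appRole[appRoleIndex].Shortname

output selectedRole string = '${appRoleName} (${appShortName})'
```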

Additionally, I am using two more parameters for NSG rules and AZPrefixes. For this module, I am using a single NSG rule for all networks, but the parameter context could be rebuilt into separate files or a single file with separate objects and arrays for multiple NSGs.

AZPrefixes holds the naming suffixes for various Azure services as defined on the Azure site. I also have a reference to an open public GitHub resource that deploys a web app tool, which helps choose the correct naming and determine the correct suffix. In any case, this file has all the prefixes, and the naming convention function picks up the correct suffix based on the service we deploy. I am using loadJsonContent for one parameter file and json(loadTextContent()) for the other. Both do the same job, but the objective was to test both tools. So, unlike the main parameter file, which changes based on the deployment, the AzPrefixes and NSG files are parsed from JSON, and the selection depends on the string functions and the service being deployed.
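The two loading approaches can be illustrated roughly as follows (same file path as above; a sketch, not the full module):

```bicep
// Both variables end up holding the same parsed object.
// loadJsonContent parses the JSON file directly at compile time;
// loadTextContent returns the raw string, which json() then parses.
var prefixesDirect = loadJsonContent('./Parameters/AzPrefixes.json')
var prefixesViaText = json(loadTextContent('./Parameters/AzPrefixes.json'))
```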

Main Bicep File

The main.bicep file contains all the parameters and decorators needed to capture values from the deployment parameters, which change based on the environment, application, and so on. After that, it builds the neuCoreRG01 resource group, which must exist before the naming convention module scoped to it, and the rest of the resources, can be deployed.

 

// Meaningful variable, used with the if statement that enables shared services.
var EnableSSResouce = env == 'prod'



@description('Resource tags that are passed to the other resource modules')
param tagValues object = {
  createdBy: 'prasant.chettri@xxxx.com' // if az cli, then it is deployed from the pipeline
  environment: env
  deploymentDate: currentDate
  product: appRoleName
}

@description('Core resource group created to anchor the naming prefix before any other resource gets deployed')
resource coreResourceGroup 'Microsoft.Resources/resourceGroups@2022-09-01' = { // existing = {
  name: 'neuCoreRG01'
  location: location
}

@description('Output value of the base RG')
output coreRG string = coreResourceGroup.id

@description('NamingConvention MODULE BLOCK')
module namingConvention './modules/namingConvention.bicep' = {
  name: '${env}-deployNaming'
  scope: coreResourceGroup
  params: {
    department: department
    environment: env
    // appRoleName: appRole[1].Name
    appRoleShortName: appShortName
    // location: locationList[locationIndex].location  *** not needed, as the location short name is used to generate the name
    locationShortName: locationShortName // 0 west2, 1 east, 2 westus, 3 central, 4 west3
  }
}

output storageAccountname string = namingConvention.outputs.outputObjects.saAccountNamePlaceHolder

At run time, it calls namingConvention.bicep:

// The name of the team that will be responsible for the resource.
@maxLength(8)
@description('Department name that gets passed from the parameter file during the deployment')
param department string
@description('Generate the current date for a useful suffix and deployment name')
param currentDate string = utcNow('yyyy-MM-dd')
// The environment that the resource is for. Accepted values are defined to ensure consistency.
@description('Environment name that gets passed from the parameter file during the deployment')
param environment string
@description('appRoleShortName from the object inside the array of application names and application short names in the main template')
param appRoleShortName string
//param appRoleName string
param locationShortName string
param index int = length(locationShortName)
// ****A more appropriate prefix for a production env would extract only a single letter of the environment instead of three letters
// param locationprefix = substring(environment,0,1)

// The function/goal of the resource, for instance the name of an application it supports


// An index number. This enables you to have some sort of versioning or to create redundancy
// param index int

// First, we create shorter versions of the application role and the department name.
// This is used for resources with a limited length to the name.
// There is a risk to doing it this way, as the results might be undesirable.
// An alternative might be to have these values be a parameter.
@description('Azure naming prefixes')
var azNamePrefixes = loadJsonContent('./Parameters/AzPrefixes.json')
// We only need the first three letters of the environment, so we take a substring.
//var servicePrefix = azNamePrefixes.storageAccountPrefix.name
// var appRole = environmentInfo.parameters.appRole.value
// var environmentLetter = substring(environment,0,2)

// This line constructs the resource name. It uses [PC] for the resource type abbreviation.
// var resourceNamePlaceHolder = '${department}-${environment}-${appRoleShortName}${locationShortName}-[PC]' //-${padLeft(index,2,'0')}'
// This part can be replaced in the final template
// This line creates a short version for resources with a max name length of 24

// Storage accounts have specific limitations. The correct convention is created here: it converts the name to lowercase and limits the length to 20.
// That leaves the flexibility to add a 2-3 letter suffix during resource deployment.
// var restrictedNamePlaceholder = take(toLower('sharedservices001'),12)

//var saAccountNamePlaceHolder = take(toLower('${department}${environment}${appRoleShortName}${azNamePrefixes.parameters.storageAccountPrefix}${padLeft(index,2,'0')}'),20)


// Configuration set object variable to generate collective output; keeps the code shorter
var outputObjects = {
  // This line constructs the non-restricted resource naming convention. It uses [PC] for the resource type abbreviation.
  resourceNamePlaceHolder: '${department}-${environment}-${appRoleShortName}${locationShortName}-[PC]'
  // This line constructs the restricted resource naming convention.
  restrictedNameSSPlaceholder: take(toLower('sharedservices001'), 12)
  // This line constructs the restricted resource naming convention for the storage account
  saAccountNamePlaceHolder: take(toLower('relli${department}${environment}${appRoleShortName}${azNamePrefixes.parameters.storageAccountPrefix}${padLeft(index,2,'0')}'), 20)
  currentdate: currentDate
  restrictedNamePlaceholder: take(toLower('${department}${environment}${appRoleShortName}'), 11)
}
// This line creates an object output to send the collective naming convention outputs for use in other modules
output outputObjects object = outputObjects
// output locShortName string = locationShortName





 

Naming Convention Module

I am transferring a couple of parameters from the primary Bicep module to the naming convention module, including the short location name and the short application name. For the Azure service prefixes, I read a JSON file into a variable and then iterate over it. I am creating four naming placeholders: one for the storage account naming convention, a restricted-name placeholder for shared services, a restricted naming placeholder for other services with naming restrictions, and a general naming placeholder for services that allow special characters and longer names.

resourceNamePlaceHolder : '${department}-${environment}-${appRoleShortName}${locationShortName}-[PC]'

 

I am currently using "PC" as a temporary placeholder, but in an actual deployment environment, it should be replaced with "AZS" for Azure services. The suffix for the service is extracted from the JSON file and then manipulated using functions such as "replace" and "length".

saAccountNamePlaceHolder : take(toLower('relli${department}${environment}${appRoleShortName}${azNamePrefixes.parameters.storageAccountPrefix}${padLeft(index,2,'0')}'),20)

I am using the take function to limit the length of the storage account name to 20 characters, and then applying the toLower function to ensure that there are no uppercase letters in the name that could potentially cause deployment errors.

restrictedNamePlaceholder : take(toLower('${department}${environment}${appRoleShortName}'),11)

To handle other services with restricted naming conventions, I use a similar approach: the placeholder "PC" (replaced with the suffix for the corresponding service), the take function to keep the name under the character limit, and the toLower function to avoid uppercase characters causing deployment errors.
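Put together, the placeholder substitution and the length/case restrictions look roughly like this (the values are made up for illustration; 'kv' is a hypothetical suffix):

```bicep
// Hypothetical walk-through of the string functions used above.
var placeholder = 'IT-prod-ap-azw2-[PC]'              // shape of resourceNamePlaceHolder
var kvName = replace(placeholder, '[PC]', 'kv')       // 'IT-prod-ap-azw2-kv'
var saName = take(toLower('ITprodapst01'), 20)        // lower-cased, capped at 20 chars
```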

 

All of these naming variables are enclosed within the configuration-map variable outputObjects, which makes it easier to generate outputs from a single object. While this approach does not automatically cache and populate the suffixes in other modules, it is easy to maintain and should not be a major issue, since it gives a standard structure for assignment to variables in other modules.

// Configuration set object variable to generate collective output; keeps the code shorter
var outputObjects = {
  // This line constructs the non-restricted resource naming convention. It uses [PC] for the resource type abbreviation.
  resourceNamePlaceHolder: '${department}-${environment}-${appRoleShortName}${locationShortName}-[PC]'
  // This line constructs the restricted resource naming convention.
  restrictedNameSSPlaceholder: take(toLower('sharedservices001'), 12)
  // This line constructs the restricted resource naming convention for the storage account
  saAccountNamePlaceHolder: take(toLower('relli${department}${environment}${appRoleShortName}${azNamePrefixes.parameters.storageAccountPrefix}${padLeft(index,2,'0')}'), 20)
  currentdate: currentDate
  restrictedNamePlaceholder: take(toLower('${department}${environment}${appRoleShortName}'), 11)
}
// This line creates an object output to send the collective naming convention outputs for use in other modules
output outputObjects object = outputObjects
// output locShortName string = locationShortName

In the primary module, I'm using a symbolic name for the resource group, and this is how I'm passing the results from the naming convention module to other sub-modules. Additionally, I'm demonstrating how I'm replacing the initial placeholder with the real service name extracted from the JSON content using the replace function.


@description('RG deployment module')
module demoResouceGroup 'Modules/pcResouceGroup.bicep' = {
  name: 'RGDeployment-${deploymentsuffix}'
  params: {
    location: location
    demoRgName: 'demoRG-${replace(namingConvention.outputs.outputObjects.resourceNamePlaceHolder, '[PC]', sharedNamePrefixes.parameters.resourceGroupPrefix)}' // cannot be a variable in the primary block; the variable has not been generated yet
    saNamingPrefix: namingConvention.outputs.outputObjects.saAccountNamePlaceHolder
    tags: tagValues
    virtualNetworks: virtualNetworks
    dnsServers: dnsServers
    environment: env
    saAccountCounts: saAccountCounts
    resourceNamingPlacHolder: namingConvention.outputs.outputObjects.resourceNamePlaceHolder
    restrictedNamingPlaceHolder: namingConvention.outputs.outputObjects.restrictedNamePlaceholder
    vmCountIndex: vmCountIndex
    adminUsername: adminUsername
    adminPassword: adminPassword
  }
}

 

Similarly, I am passing output from the naming convention module to the shared-services resource group module, which is used only for building shared services.

@description('module to create shared resources')
module sharedModule 'Modules/pcShared.bicep' = if (EnableSSResouce) {
  scope: coreResourceGroup
  name: 'Shared-${deploymentsuffix}'
  params: {
    location: location
    environment: env
    tags: tagValues
    restrictedNamingPlaceHolder: namingConvention.outputs.outputObjects.restrictedNameSSPlaceholder
    namesuffix: 'lnss'
  }
}

 

Core Resource Group Module

The bicep module for deploying core resources includes several features such as utilizing nested loops to generate reusable output, deploying multiple storage accounts, disks, NICs, and VMs based on specified parameters. The module also contains parameters and decorators, as well as variables for loading JSON and replacing naming conventions. These variables are similar to the logic applied in the main module after values are collected from the namingConvention module. The DemoResourceGroup module creates a resource group using a specific naming convention that adds "RG-" as a prefix and "main01" as a suffix to the deployment name. The following code snippet shows the ResourceGroup deployment block within the DemoResourceGroup module.

@description('passes through all JSON content and assigns it to a variable')
var resourceNamePrefix = loadJsonContent('./Parameters/AzPrefixes.json')
// Replacing the placeholder initials with the correct service name
var nsgNamingPlaceHolder = replace(resourceNamingPlacHolder, '[PC]', resourceNamePrefix.parameters.NetworSecurityGroup)
var vmNamePrefix = take(replace(resourceNamingPlacHolder, '[PC]', resourceNamePrefix.parameters.virtualmachinePrefix), 12)

resource pcResourceGroup 'Microsoft.Resources/resourceGroups@2022-09-01' = {
  name: 'RG-${demoRgName}-main01'
  location: location
  tags: tags
}

 

After deploying the resource group, the storage account deployment module is executed using a for loop that iterates based on the number of storage accounts specified in the JSON file or provided during deployment.

@description('Creates the number of storage accounts specified during deployment')
module StorageAccount './pcStorageAccount.bicep' = [for i in range(0, saAccountCounts): {
  scope: pcResourceGroup
  name: 'deploy-${saNamingPrefix}deploymentdate${i}'
  params: {
    location: location
    saAccountName: '${saNamingPrefix}psa0${i}'
    tags: tags
  }
}]

 

Storage Module

Skipping ahead from the resource block, let's take a look at how the storage resource Bicep is constructed. Within the storage account resource block, a blob service and a file share are created for each account, with varying naming suffixes for each service and account as the loop progresses. The retention and deletion retention values are standardized based on the resource definition in the Microsoft documentation; it is not the intention of this blog to cover each item in the resource definition in detail, as that is covered extensively in the Microsoft Learn modules and documentation. If necessary, however, the properties in the storage account can be parameterized using if statements or deployment criteria to provide greater flexibility in changing retention and other values.

param location string
param saAccountName string
param tags object

// param storageSuffix string
// var storageAccountname = '${namingConvention}${storageSuffix}'

@description('creates the number of storage accounts based on the saIndex value specified in the deployment parameter, or the default')
resource pcStorageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: saAccountName
  location: location
  tags: tags
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  properties: {
    accessTier: 'Hot'
    allowBlobPublicAccess: false
    publicNetworkAccess: 'Disabled'
    minimumTlsVersion: 'TLS1_2'
    supportsHttpsTrafficOnly: true
    networkAcls: {
      bypass: 'AzureServices'
      defaultAction: 'Deny'
      //ipRules:
    }
  }
}

@description('creates one blob service per storage account, called and looped from the RG module')
resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2022-09-01' = {
  name: 'default'
  parent: pcStorageAccount
  properties: {
    restorePolicy: {
      enabled: false
    }
    deleteRetentionPolicy: {
      enabled: true
      days: 7
    }
    containerDeleteRetentionPolicy: {
      enabled: true
      days: 7
    }
    changeFeed: {
      enabled: true
      retentionInDays: 5
    }
    isVersioningEnabled: true
  }
}

@description('creates one file share service per storage account, called and looped from the RG module')
resource fileServices 'Microsoft.Storage/storageAccounts/fileServices@2022-09-01' = {
  parent: pcStorageAccount
  name: 'default'
  properties: {
    shareDeleteRetentionPolicy: {
      enabled: true
      days: 7
    }
  }
  dependsOn: [
    blobService
  ]
}
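If the hard-coded retention values ever needed to vary, one option is an environment-driven ternary. This is a sketch with assumed parameter names, not part of the actual module:

```bicep
// Sketch: drive retention from the environment instead of hard-coding 7 days.
param environment string

var retentionDays = environment == 'prod' ? 30 : 7

// ...then, inside the blobServices properties:
// deleteRetentionPolicy: {
//   enabled: true
//   days: retentionDays
// }
```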

 

Network Module

As the storage account deployment is relatively simple and has fewer parameters, let's move back to the resource module and examine the other resources. The virtual network module turned out to be more complex than I initially anticipated: I needed a loop at the module level for the VNets and another loop at the resource definition level for the subnets. It took several attempts to get the correct output, and the nested loop is not complete until an output loop at the resource level for the subnets is defined and correlated with the VNet level in the calling module. To explain more clearly, I am laying out the code for both the calling module block and the resource block for the virtual network module in the ResourceGroup Bicep file.



@description('virtual network module; passes the required parameters to generate resources for the vnets and the subnets inside the respective vnet')
module pcVirtualNetwork './pcVirtualNetwork.bicep' = [for vnet in virtualNetworks: {
  scope: pcResourceGroup
  name: vnet.name
  params: {
    virtualNetworks: virtualNetworks
    vnet: vnet
    addressPrefixes: vnet.addressPrefixes
    location: location
    tags: tags
    nsgNamePrefix: nsgNamingPlaceHolder
    dnsServers: dnsServers
    subNets: [for subnet in vnet.subnets: {
      name: subnet.name
      addressPrefix: subnet.addressPrefix
    }]
  }
}]


 

The virtual network and NSG blocks in the resource Bicep file:

@description('Tags to be applied to resources')
param tags object
@description('Azure location where resources are deployed')
param location string
@description('VNET address prefixes')
param addressPrefixes array
param nsgNamePrefix string
param vnet object = virtualNetworks[0]
param virtualNetworks array // added the param on 3/26/23
param dnsServers array
param subNets array

@description('load the content of the JSON parameter file into the variable')
var nsgSecurityRules = json(loadTextContent('./Parameters/nsg-rules.json')).securityRules
var dnsServersvar = {
  dnsServers: array(dnsServers)
}

@description('creates the NSG with the number of NSG rules defined in the NSG param file/JSON variable')
resource pcnsg 'Microsoft.Network/networkSecurityGroups@2022-07-01' = {
  name: nsgNamePrefix
  location: location
  properties: {
    securityRules: nsgSecurityRules
  }
}
@description('creates the vNet and subnets based on the blocks defined in the parameter file')
resource pcVnet 'Microsoft.Network/virtualNetworks@2022-07-01' = { // [for (vnet,idx) in virtualNetworks: {  // it was without a loop on 2/26/23
  name: vnet.name // '${vnet}.${NetworkNamePrefix}01'
  location: location
  tags: tags
  properties: {
    addressSpace: {
      addressPrefixes: addressPrefixes
    }
    // Ternary: if the dnsServers parameter is not empty, the dnsServersvar variable is assigned to the dhcpOptions field.
    dhcpOptions: !empty(dnsServers) ? dnsServersvar : null
    subnets: [for subnet in vnet.subNets: {
      name: subnet.name // '${subnet.name}${environment}' // '${snetNamePrefix}${subnet.name}'
      properties: {
        addressPrefix: subnet.addressPrefix
        // Assigns service endpoints if the subnet defines them, or an empty array otherwise
        serviceEndpoints: contains(subnet, 'serviceEndpoints') ? subnet.serviceEndpoints : []
        delegations: contains(subnet, 'delegation') ? subnet.delegation : []
        networkSecurityGroup: {
          id: pcnsg.id
        }
        // Assigns the private endpoint policies if the subnet states them, or null otherwise
        privateEndpointNetworkPolicies: contains(subnet, 'privateEndpointNetworkPolicies') ? subnet.privateEndpointNetworkPolicies : null
        privateLinkServiceNetworkPolicies: contains(subnet, 'privateLinkServiceNetworkPolicies') ? subnet.privateLinkServiceNetworkPolicies : null
      }
    }]
  }
}

output subnetsall array = [for subnet in vnet.subNets: subnet]
 

 

As the virtual network module is triggered, it picks up the first VNet parameter from the JSON parameter file, which is hubVnet. Within the module, it then enters a nested loop for the first subnet, CoreSubnet. In the resource block, it creates the VNet and generates three subnet resources: CoreSubnet, ToolSubnet, and DirectoryServiceSubnet, eventually generating output for the subnetsall array defined above at deployment time for the hubVnet as follows:

[{"name":"CoreSubnet","addressPrefix":"10.10.1.0/24"},{"name":"ToolSubnet","addressPrefix":"10.10.2.0/24"},{"name":"DirectoryServiceSubnet","addressPrefix":"10.10.3.0/24"}].

 

I was not able to get the correct output by keeping the subnet loop only at the module declaration block or only at the resource definition block; instead, I had to loop over the VNets at the module level and over the subnets at the resource level. Duplicating the subnet loop at both levels created nine outputs instead of three. The working combination builds the subnets only in the resource definition loop and emits the subnetsall output array as follows:

[for subnet in vnet.subNets: subnet].

 

This produced the correct output that could be used to generate the respective network variables and output for other modules as it unfolds.

 

Now we are back in the resource module to build on the VNet module, but before that, we will cover some debatable options for manifesting the network module output in different ways. Here is a long list of output options showing two to three different ways of manifesting VNet/subnet output for use, including the call to the VM module.

var resourcegId = pcResourceGroup.id

var vnetResourcePrefix = '${resourcegId}/providers/Microsoft.Network/virtualNetworks/'

var hubVnetId = '${vnetResourcePrefix}${pcVirtualNetwork[0].name}/subnets/'
var appVnetId = '${vnetResourcePrefix}${pcVirtualNetwork[1].name}/subnets/'
var dbVnetId = '${vnetResourcePrefix}${pcVirtualNetwork[2].name}/subnets/'
var wvdVnetId = '${vnetResourcePrefix}${pcVirtualNetwork[3].name}/subnets/'

var coreSubnetName = pcVirtualNetwork[0].outputs.subnetsall[0].name
var coreSubnetId = '${hubVnetId}${coreSubnetName}'
var hubToolSubnetName = pcVirtualNetwork[0].outputs.subnetsall[1].name
var hubToolSubnetId = '${hubVnetId}${hubToolSubnetName}'
var hubDSSubnetName = pcVirtualNetwork[0].outputs.subnetsall[2].name
var hubDsSubnetId = '${hubVnetId}${hubDSSubnetName}'

var appSubnetName = pcVirtualNetwork[1].outputs.subnetsall[0].name
var appSubnetId = '${appVnetId}${appSubnetName}'
var appToolSubnetName = pcVirtualNetwork[1].outputs.subnetsall[1].name
var appToolSubnetId = '${appVnetId}${appToolSubnetName}'
var appDSSubnetName = pcVirtualNetwork[1].outputs.subnetsall[2].name // index corrected: the app vnet is pcVirtualNetwork[1], not [0]
var appDsSubnetId = '${appVnetId}${appDSSubnetName}'

var dbSubnetName = pcVirtualNetwork[2].outputs.subnetsall[0].name
var dbSubnetId = '${dbVnetId}${dbSubnetName}'
var dbToolSubnetName = pcVirtualNetwork[2].outputs.subnetsall[1].name
var dbToolSubnetId = '${dbVnetId}${dbToolSubnetName}'

var wvdSubnetName = pcVirtualNetwork[3].outputs.subnetsall[0].name
var wvdSubnetId = '${wvdVnetId}${wvdSubnetName}'
var wvdToolSubnetName = pcVirtualNetwork[3].outputs.subnetsall[1].name
var wvdToolSubnetId = '${wvdVnetId}${wvdToolSubnetName}'




var outputNetworkObjects = {

  coreSubnetName: coreSubnetName
  coreSubnetID: coreSubnetId

  hubToolSubnetName: hubToolSubnetName
  hubToolSubnetID: hubToolSubnetId
  hubDsSubnetName: hubDSSubnetName
  hubDsSubnetID: hubDsSubnetId

  appSubnetName: appSubnetName
  appSubnetId: appSubnetId

  appToolSubnetName: appToolSubnetName
  appToolSubnetId: appToolSubnetId

  appDsSubnetName: appDSSubnetName
  appDsSubnetId: appDsSubnetId

  dbSubnetName: dbSubnetName
  dbSubnetId: dbSubnetId

  dbToolSubnetName: dbToolSubnetName
  dbToolSubnetId: dbToolSubnetId

  wvdSubnetName: wvdSubnetName
  wvdSubnetId: wvdSubnetId

  wvdToolSubnetName: wvdToolSubnetName
  wvdToolSubnetId: wvdToolSubnetId
}

var wvdtags = outputNetworkObjects.wvdSubnetId

module VMresource 'pcVirtualMachine.bicep' = [for i in range(0, vmCountIndex): {
  scope: pcResourceGroup
  name: '${vmNamePrefix}${i}'

  params: {
    location: location
    tags: wvdtags
    pcnamingConvention: 'vm${restrictedNamingPlaceHolder}${i}'

    environmentType: environment
    adminUsername: adminUsername
    adminPassword: adminPassword

    coreSubnetName: coreSubnetName // alternate option: pcVirtualNetwork[0].outputs.subnetsall[0].name
    coreSubnetID: coreSubnetId // alternate option: '${hubVnetId}${pcVirtualNetwork[0].outputs.subnetsall[0].name}'

    OSdiskname: '${restrictedNamingPlaceHolder}disk${i}'
    datadiskname: '${restrictedNamingPlaceHolder}datDisk${i}'
  }
}]

output coreRgname string = pcResourceGroup.name

 

To obtain the ID for the corresponding subnet to be used in the compute or web app module, I resorted to interpolating a string from the resource group ID, /virtualNetworks/vnetName, and /subnets/subnetName, which is equivalent to the final subnet ID value. While it would be better to separate the compute module into a different composite module that relies on the network, for the purposes of testing the interpolation method this approach works well. By passing the parameter with an array value for subnet ID and subnet name when calling the compute module, we can easily determine what the array value of the vnet and subnet represents based on the parameter file. For production use, however, I recommend a completely isolated composite module for the network, which I will build in the next repository version.
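As an alternative to manual string interpolation, Bicep's built-in `resourceId()` function produces the same subnet ID. A minimal sketch, with illustrative parameter names that are not from the repository:

```bicep
// illustrative names — adjust to the actual vnet/subnet parameters
param vnetName string = 'hub-vnet'
param subnetName string = 'core-subnet'

// resourceId() builds the full ARM ID within the current subscription and
// resource group, equivalent to interpolating
// '<rgId>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>'
var coreSubnetId = resourceId('Microsoft.Network/virtualNetworks/subnets', vnetName, subnetName)

output subnetId string = coreSubnetId
```

This avoids hand-assembling the `/providers/...` prefix, at the cost of being scoped to the deployment's subscription and resource group unless the scoped overloads are used.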

 

For now, I have added a variable and output directly to the resource module to show how it could be generated based on different needs to pass parameters with better meaning and understandability. However, this goes against the primary goal of keeping the code module simpler and reusable with a parameter file.

 

VM Compute Module

Moving on to the compute module, which is the VM deployment module, I have defined the resource block for the data disk resource for the production VM and also created the resource block for the VM NIC, which depends on it. The data disk resource is only applicable to the production VM, which is the only VM that receives the D series SKU. Additionally, the D series VMs sit in a higher tier and support auto-patching, while the staging and QA environments consume only B series VMs.

To condition a particular type of compute resource, I have used a configuration map, which I found to be a useful tool. To make the code block shorter, we could define the configuration map in a separate Bicep file and output it to the VM. The configuration map has separate blocks for the hardwareProfile, storageProfile, networkProfile, diagnosticsProfile, and osProfile, with the network and disk profiles depending on the IDs of the network and disk resource blocks. The remaining profiles are defined directly on the configuration map block. When we need to capture a value out of the configuration map for the resource, we define the path as environmentConfigurationMap[environmentType].&lt;profile&gt;. The code for the configuration map and VM block is provided below. Note that the parameters passed for the naming convention are still used down to the disk and NIC resource level.

param pcnamingConvention string
param tags string
param tagvalue object = {
  createdby:tags

}

param coreSubnetName string
param coreSubnetID string
param OSdiskname string
param datadiskname string
// param diskindex int
//param snetNamePrefix string
@description('Username for the Virtual Machine.')
param adminUsername string
@description('Password for the Virtual Machine.')
@minLength(12)
@secure()
param adminPassword string



@description('Location for all resources.')
param location string = resourceGroup().location

@description('Defining data disk resource')
resource myDisk 'Microsoft.Compute/disks@2022-07-02' = {
  name: datadiskname
  location: location
  sku: {
    name: 'StandardSSD_LRS'
  }

  properties: {
    diskSizeGB: 20
    creationData: {
      createOption: 'Empty'
    }
  }
}

// This sets up the configuration map, shortens the code in the VM block, and avoids multiple if/else expressions at the resource block
param environmentType string
var environmentConfigurationMap = {
  dev : {
    hardwareProfile: {
     vmSize: 'Standard_B2ms'
   }
   storageProfile: {
     imageReference: {
       publisher: 'MicrosoftWindowsServer'
       offer: 'WindowsServer'
       sku: '2022-Datacenter'
       version: 'latest'
   
     }
     osDisk: {
       osType: 'Windows'
       name: OSdiskname
       createOption: 'FromImage'
       caching: 'ReadWrite'
       writeAcceleratorEnabled: false
       managedDisk: {
         storageAccountType: 'StandardSSD_LRS'
       }
       deleteOption: 'Detach'
     }
   }
 
   osProfile: {
    computerName: pcnamingConvention
    adminUsername: adminUsername
    adminPassword: adminPassword
    windowsConfiguration: {
      provisionVMAgent: true
      enableAutomaticUpdates: true
      patchSettings: {
        patchMode: 'AutomaticByOS'
        assessmentMode: 'ImageDefault'
        enableHotpatching: false
      }
}
}
networkProfile: {
networkInterfaces: [
  {
    id: pcVmNic.id
    properties: {
      primary: true
    }
  }
  ]

}  
winRM: {
listeners: []
}
enableVMAgentPlatformUpdates: false
diagnosticsProfile: {
bootDiagnostics: {
  enabled: false
}
}
}

stg : {
      hardwareProfile: {
       vmSize: 'Standard_B2ms'
     }
     storageProfile: {
       imageReference: {
         publisher: 'MicrosoftWindowsServer'
         offer: 'WindowsServer'
         sku: '2022-Datacenter'
         version: 'latest'
     
       }
       osDisk: {
         osType: 'Windows'
         name: OSdiskname
         createOption: 'FromImage'
         caching: 'ReadWrite'
         writeAcceleratorEnabled: false
         managedDisk: {
           storageAccountType: 'StandardSSD_LRS'
         }
         deleteOption: 'Detach'
       }
     }
   
     osProfile: {
      computerName: pcnamingConvention
      adminUsername: adminUsername
      adminPassword: adminPassword
      windowsConfiguration: {
        provisionVMAgent: true
        enableAutomaticUpdates: true
        patchSettings: {
          patchMode: 'AutomaticByOS'
          assessmentMode: 'ImageDefault'
          enableHotpatching: false
        }
  }
}
 networkProfile: {
  networkInterfaces: [
    {
      id: pcVmNic.id
      properties: {
        primary: true
      }
    }
    ]

}  
winRM: {
  listeners: []
}
enableVMAgentPlatformUpdates: false
diagnosticsProfile: {
  bootDiagnostics: {
    enabled: false
  }
}
 }
 
    prod : {
   
      hardwareProfile: {
        vmSize: 'Standard_D2s_v5'
      }
      storageProfile: {
        imageReference: {
          publisher: 'MicrosoftWindowsServer'
          offer: 'WindowsServer'
          sku: '2019-Datacenter'
          version: 'latest'
     
        }
        osDisk: {
          osType: 'Windows'
          name: OSdiskname
          createOption: 'FromImage'
          caching: 'ReadWrite'
          writeAcceleratorEnabled: false
          managedDisk: {
            storageAccountType: 'StandardSSD_LRS'
          }
          deleteOption: 'Detach'
        }
       dataDisks: [
          {
            createOption: 'Attach'
            // lun must be an integer; take the trailing digit of the disk name
            lun: int(substring(datadiskname, length(datadiskname) - 1, 1))
            diskSizeGB: 20
            managedDisk: {
              id: myDisk.id
            }
          }
        ]
}
osProfile: {
  computerName: pcnamingConvention
  adminUsername: adminUsername
  adminPassword: adminPassword
  windowsConfiguration: {
    provisionVMAgent: true
    enableAutomaticUpdates: true
    patchSettings: {
      patchMode: 'AutomaticByOS'
      assessmentMode: 'ImageDefault'
      enableHotpatching: false
    }
  }
}
networkProfile: {
  networkInterfaces: [
    {
      id: pcVmNic.id
      properties: {
        primary: true
      }
    }
    ]

}  

winRM: {
  listeners: []
}
enableVMAgentPlatformUpdates: false

diagnosticsProfile: {
  bootDiagnostics: {
    enabled: true
  }
}
 }
    }
 

@description('Defining NIC resource')
resource pcVmNic 'Microsoft.Network/networkInterfaces@2022-05-01' = {
  name: '${pcnamingConvention}nic'
  location: location
  properties: {
    ipConfigurations: [
      {
        name: 'ipConfigName'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          subnet: {
            name: coreSubnetName
            id: coreSubnetID
          }
        }
      }
    ]
  }
}
 
output netinterface string = coreSubnetID
resource pcVirtualMachine 'Microsoft.Compute/virtualMachines@2022-11-01' = {
  name: pcnamingConvention
  location: location
  tags: tagvalue
  properties: {
    storageProfile: environmentConfigurationMap[environmentType].storageProfile
    hardwareProfile: environmentConfigurationMap[environmentType].hardwareProfile
    networkProfile: environmentConfigurationMap[environmentType].networkProfile
    diagnosticsProfile: environmentConfigurationMap[environmentType].diagnosticsProfile
    osProfile: environmentConfigurationMap[environmentType].osProfile
  }
}
 

 

 

Shared Service Resource

 

The SharedServices module begins with an Azure solution array decorator for logging, which could be defined separately for a smaller main code module and consumed as output in the next revision. The loadJsonContent call and the other parameters passed to this module are not covered again here, since the naming convention carries through to every resource and sub-resource. It is worth mentioning that the restricted naming convention is used for Log Analytics, Key Vault, and Recovery Services Vault. In addition, an @allowed decorator and a Log Analytics solution list based on a co-worker's example are used, since the solution parameter now supports an array.

var enableSS = environment == 'prod'
param aaDate string = utcNow('MM-dd-yy') // 'MM' = month; lowercase 'mm' would give minutes
var sharedNamePrefixes = loadJsonContent('./Parameters/AzPrefixes.json')
@allowed([
  'AzureActivity'
  'ChangeTracking'
  'Security'
  'SecurityInsights'
  'ServiceMap'
  'SQLAssesment'
  'AgentHealthAssistant'
  'AntiMalware'
  'Updates'
  'VMInsights'
])
@description('Solutions would be added to the log analytics workspace. - DEFAULT VALUE  AgentHealthAssistant,AntiMalware,AzureActivity,ChangeTracking,Security, SecurityInsights,ServiceMap,SQLAssesment,Updates,VMInsights')
param parPcLawSolutions array = [  
'AzureActivity'
'ChangeTracking'
'Security'
'SecurityInsights'
'ServiceMap'
//'AgentHealthAssistant'
//'AntiMalware'
//'SQLAssesment'
//'Updates'
//'VMInsights'
]
// param parPcLawSolutions array TEMP{test}

 

After the deploymentScript block, there is a code block for Managed Identity, which will be utilized by the script block. Although Microsoft suggests that newer bicep versions do not require Managed Identity and Role-Based Access Control (RBAC) for simple scripts, I could not get it to work without a Managed Identity and custom role. I found several articles describing similar issues, so I decided to try using Managed Identity with a custom role, and it worked perfectly.

Custom RBAC for Managed identity

{
    "properties": {
        "roleName": "MIResourceDeploy",
        "description": "",
        "assignableScopes": [
            "/subscriptions/d4a23241-7c83-4708-a2ce-c5c15fd80a35"
        ],
        "permissions": [
            {
                "actions": [
                    "*/read",
                    "Microsoft.Storage/storageAccounts/*",
                    "Microsoft.ContainerInstance/containerGroups/*",
                    "Microsoft.Resources/deployments/*",
                    "Microsoft.Resources/deploymentScripts/*"
                ],
                "notActions": [],
                "dataActions": [],
                "notDataActions": []
            }
        ]
    }
}

 

Resource block that defines Managed Identity.

resource userAssignedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' existing =  {
  name: deploymentMI
  scope: resourceGroup(deploymentsub,usmiRG)
}

output identityId string = userAssignedIdentity.id

 

Inline script with the output from Managed Identity –

module inlineScript './Shared/inlinebicep.bicep' = {
  name: 'runPowerShellInlineWithOutput'
  params: {
    resourceGroupName: resourceGroupname
    resourceName: existingLawName
    location: location
    UserAssignedIdentity: userAssignedIdentity.id
    namesuffix: namesuffix
  }
}
output resourceExists bool = inlineScript.outputs.resourceExists

 

Powershell Inline deployment Scripts -

The above inline script is used to call the resource block for PowerShell deploymentScripts. I am evaluating two scripts to obtain a Boolean output for the OMS workspace and the Automation account linked resource. The PowerShell script uses Get-AzResource to determine whether the respective resource already exists and, if so, assigns a Boolean true to the output variable. While this may not be the most effective use of an inline script, I am considering building one for future deployments to create a pool of disks at the OS level, improving IO speed by spanning multiple disks in a RAID array for increased IOPS. That is a good example of a VM that requires a large data volume and can be improved by combining disk output. Other potential use cases include DSC for domain controllers and domain joining, although in that case I can avoid an inline script by using DSC from Automation. There are many other potential use cases for inline scripts, such as shutting down services 30 minutes after deployment, which could be useful for lab purposes.

Here is the full code with the output –

param utcValue string = utcNow()

param location string
param resourceGroupName string
param resourceName  string
param UserAssignedIdentity string
param namesuffix string
var aaLinkedName = '${resourceName}a${namesuffix}'

resource runPowerShellInlineWithOutput 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
  name: 'runPowerShellInlineWithOutput'
  location: location
  kind: 'AzurePowerShell'
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${UserAssignedIdentity}': {}
    }
  }
  properties: {
    forceUpdateTag: utcValue
    azPowerShellVersion: '9.0'
        arguments:  '-resourceName ${resourceName} -resourceGroupName  ${resourceGroupName} -aaLinkedName ${aaLinkedName}'
    scriptContent: '''
   
    param(
      [string] $resourceName,
      [string] $ResourceGroupName,
      [string] $aaLinkedName
    )
    if (Get-AzResource -ResourceName $resourceName -ResourceType 'Microsoft.OperationalInsights/workspaces' -ResourceGroupName $ResourceGroupName -ErrorAction SilentlyContinue | Select-Object -First 1) {
      # Resource found
      $ResourceExists = $true
    }
    else {
      # Resource not found
      $ResourceExists = $false
    }
    Write-Output $ResourceExists
    Write-Output $resourceName

    if (Get-AzResource -ResourceName $aaLinkedName -ResourceType 'Microsoft.OperationalInsights/workspaces/linkedServices' -ResourceGroupName $ResourceGroupName -ErrorAction SilentlyContinue | Select-Object -First 1) {
      # Resource found
      $linkedResourceExists = $true
    }
    else {
      # Resource not found
      $linkedResourceExists = $false
    }
    Write-Output $linkedResourceExists
    Write-Output $aaLinkedName
    $DeploymentScriptOutputs = @{}
    $DeploymentScriptOutputs['Result'] = $ResourceExists
    $DeploymentScriptOutputs['aalResult'] = $linkedResourceExists
    $DeploymentScriptOutputs['nameOutput'] = $resourceName
    '''
   
    timeout: 'PT1H'
    cleanupPreference: 'OnSuccess'
    retentionInterval: 'PT2H'
  }
}

output resourceExists bool = runPowerShellInlineWithOutput.properties.outputs.Result
output resourceName string = runPowerShellInlineWithOutput.properties.outputs.nameOutput
output linkedResourceExists bool = runPowerShellInlineWithOutput.properties.outputs.aalResult

SharedServices

Returning to the SharedServices module, I use an if statement to deploy SharedServices exclusively to the production environment. For this, I use the enableSS variable, which evaluates to true when the environment is prod but can be overridden at deployment time through the parameter file or a deployment expression, depending on the order of precedence. Here is the rest of the SharedServices module after the deploymentScript block:

module pcAutomation 'shared/pcAutomation.bicep' = if (enableSS) {
  name: 'deployauto${aaDate}'
  params: {
    namingConvention: replace(restrictedNamingPlaceHolder, '[PC]', sharedNamePrefixes.parameters.automationAccountPrefix)
    location: location
    tags: tags
  }
}

module pcLaw 'Shared/pcLaw.bicep' = if (enableSS) {
  name: 'deploy-law-${restrictedNamingPlaceHolder}${sharedNamePrefixes.parameters.logAnalyticsWorkspacePrefix}${namesuffix}'
  params: {
    namingConvention: '${restrictedNamingPlaceHolder}${sharedNamePrefixes.parameters.logAnalyticsWorkspacePrefix}'
    parPcLawSolutions: parPcLawSolutions
    pcAutoId: pcAutomation.outputs.pcAutomationAccountId
    tag: tags
    location: location
    resourceExists: inlineScript.outputs.resourceExists
    linkedResourceExists: inlineScript.outputs.linkedResourceExists
  }
}

module keyVault 'Shared/pcKeyVault.bicep' = if (enableSS) {
  name: 'deploy-kv${restrictedNamingPlaceHolder}${sharedNamePrefixes.parameters.KeyVault}${namesuffix}'
  params: {
    location: location
    namingConvention: replace(restrictedNamingPlaceHolder, '[PC]', sharedNamePrefixes.parameters.KeyVault)
    tags: tags
  }
}

module recovery 'Shared/pcRecoveryVault.bicep' = if (enableSS) {
  name: 'deploy-rsv${aaDate}${namesuffix}'
  params: {
    location: location
    tags: tags
    namingConvention: '${restrictedNamingPlaceHolder}${sharedNamePrefixes.parameters.RecoveryServicesvault}'
    namesuffix: namesuffix
  }
}


 

Log Analytics –

The Shared Service module mainly consists of standard resources described in Microsoft's documentation, except for the Log Analytics resource block. This block uses an output derived from the PowerShell deployment script to gate deployment. The Log Analytics workspace leverages idempotency to skip deployment if the resource already exists, so the if statement is not strictly necessary; I included it for illustration. The Log Analytics solution block also checks that the array of solutions is not empty before deployment so it does not skip the deployment. To work around the limitation of not being able to use the same resourceExists parameter twice in the same module, I created a construct that converts the Boolean expression to a string and established a string variable condition that verifies whether ExistorNot equals "new" before deploying the resource.

resource pclaw 'Microsoft.OperationalInsights/workspaces@2022-10-01' = if (!resourceExists) {

  properties: {
 //   source: 'Azure'
    sku: {
      name: 'pergb2018'
    }
    retentionInDays: 30
    features: {
      legacy: 0
      searchVersion: 1
      enableLogAccessUsingOnlyResourcePermissions: true
    }
    workspaceCapping: {
      dailyQuotaGb: json('-1.0')
    }
    publicNetworkAccessForIngestion: 'Enabled'
    publicNetworkAccessForQuery: 'Enabled'
  }
  name: 'laW${namingConvention}'
  location: location
  tags: tag
 
}
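The Boolean-to-string construct mentioned above can be sketched as follows. The variable and resource names here are assumptions for illustration, not the repository's exact code:

```bicep
// assumed names — illustrating the pattern, not the exact module code
param resourceExists bool

// convert the Boolean to a string so the same flag can drive
// more than one condition in the module
var existOrNot = resourceExists ? 'existing' : 'new'

// deploy only when the workspace does not already exist
resource lawSketch 'Microsoft.OperationalInsights/workspaces@2022-10-01' = if (existOrNot == 'new') {
  name: 'law-sketch'
  location: resourceGroup().location
}
```

The same `existOrNot` string can then be compared against 'existing' elsewhere in the module, sidestepping the restriction on reusing the Boolean parameter directly.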

 

This is how we can make the deployment change based on different environments. In my case, I have three .ps1 files with different subscriptions and parameters for different apps and different numbers of compute and storage resources. The same could be plugged into a YAML workflow on GitHub Actions or an Azure DevOps pipeline.

# connect to the Azure account and subscription
Connect-AzAccount -Tenant "xx-d29e-4"

# get and set the context for the deployment
$context = Get-AzSubscription -SubscriptionId "xx-4708-a2c"
Set-AzContext $context

# pre-create the main resource group if needed
# New-AzResourceGroup -Name 'rrr' -Location 'location' -Force

# splat expression to assign parameters for the deployment
$Parameters = @{
    Name = "deploymentName" + (Get-Date).ToString("yyyyMMddHHmmss")
    TemplateFile = "main.bicep"
    TemplateParameterFile = "./modules/Parameters/parameters-infra-prod.json"
    env = "prod"
    saAccountCounts = 3
    appRoleIndex = 1
    vmCountIndex = 1
}
New-AzSubscriptionDeployment @Parameters -Verbose -Location "westus2"

# ResourceGroupName = "projNeudBaseRg"

In the end, the script generates two deployments: one based on the parameters provided through hashtable splatting, and one for the deployment of the coreResourceGroup. To prevent the subscription from being exposed in a public blog, the output value is excluded.





Resources deployed in RG-demoRG-IT-prod-dcazw2-rg-main01, which is generated from the naming convention.


Shared resource group resources, which are also named using the naming convention.


Link to Git repo - pchettri3/BicepProjFullWithoutPeeering: Bicep project full without peering (github.com)
