I would like to start with an extremely simple recipe, Public Web App with Storage Account, to try out how to document a recipe and structure the sample code. Here are the characteristics, or matching criteria:
- The application communicates with end users or consuming applications over HTTPS, such as an MVC web app or a REST API
- The application is accessible from the public internet without authentication or authorization
- Data is served from one or more publicly accessible Azure Storage Accounts
- Data sensitivity is very low, so exposure would have no impact on the organization
Here are the areas I would like to demonstrate in this introductory recipe:
- Both the Web App and the Storage Account are accessible from the public internet
- Enable RBAC on the Storage Account with Managed Identity (MS Doc link)
- Create Bicep modules to improve reusability
- Demonstrate CI/CD automation with Azure DevOps using Bicep and YAML
Source code can be found in GitHub (link).
Application Example
I am using an ‘Office Hours API’ as the web app, which is implemented as a REST API with Storage Table as the backend. This web app provides information about an organization’s office hours as well as the opening hours of their retail stores. The diagram below shows the deployment model in Azure Cloud:

Walkthrough
DevOps Code Structure
Let’s go into a bit more detail about the DevOps code structure:

The bicep-modules folder contains Bicep modules that can be reused across different projects or products; it should be your goal to build up a library of such modules. Unless you have a dedicated team to build and maintain these modules, ‘copy-&-paste’ is good enough for reuse.
The yaml-templates folder under devops-pipelines holds the YAML code that is reusable across different environments, but most likely not across different products unless they share the same deployment model. My usual approach is to copy the templates from a similar recipe or a previous project, then update them accordingly.
As described in CI/CD automation in 60 minutes, I use four pipelines for CI/CD automation, plus a PowerShell script, create.ps1, to create the pipelines. Don’t forget to change the pipeline variables accordingly:
variables:
  serviceConn: '<your-ado-service-connection>'
  resourceGroupName: '<your-resource-group-without-env-suffix>'
  appServiceName: '<your-app-service-without-env-suffix>'
Finally, the resource-groups folder has all the resource groups required for your product. In this recipe there is only one, rg-rdn-azurerecipes, which holds the main.bicep along with a parameter file per environment to ‘describe’ the deployment model. The test.ps1 is just a handy script to test the deployment of main.bicep from within Visual Studio Code.

Inside main.bicep, I like to have a variables section for generating names so that I can easily locate and change them, which may be a personal preference:
// Generate Azure Service name for different environment
var appName = '<your-app-name>'
var orgAbbr = '<your-org-abbreviations>'
var planName = 'plan-${orgAbbr}-${appName}-${environmentName}'
var webAppName = 'app-${orgAbbr}-${appName}-${environmentName}'
var storageAcctName = 'st${orgAbbr}${appName}${environmentName}'
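To show how these generated names fit into the bigger picture, here is a minimal sketch of how the surrounding main.bicep might look. The environment values, location parameter, and App Service plan SKU are my assumptions, not part of the recipe:

```bicep
// Sketch only: assumes environmentName is a parameter of main.bicep
// with hypothetical environment suffixes
@allowed([
  'dev'
  'uat'
  'prd'
])
param environmentName string

param location string = resourceGroup().location

var appName = 'officehours' // hypothetical app name
var orgAbbr = 'rdn'         // hypothetical org abbreviation
var planName = 'plan-${orgAbbr}-${appName}-${environmentName}'

// The generated name is then used when declaring the resource, e.g.:
resource appServicePlan 'Microsoft.Web/serverfarms@2022-03-01' = {
  name: planName
  location: location
  kind: 'linux'
  sku: {
    name: 'B1' // assumed SKU for illustration
  }
  properties: {
    reserved: true // required for Linux plans
  }
}
```

Deploying this with -dev, -uat, and -prd parameter files then yields consistently named resources per environment.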
Bicep: Performing Lookup
In Bicep, you can create an object that stores a set of values and retrieve them with a lookup key. In our example, the framework configuration is looked up through a simplified key, e.g. java, which maps to an actual configuration value, e.g. TOMCAT|8.5-java11. You can store the object in a variable, then index into it to set the value:
// Setup the lookup
var fxConfigure = {
  dotnet: {
    fxVersion: 'DOTNETCORE|6'
  }
  python: {
    fxVersion: 'PYTHON|3.9'
  }
  node: {
    fxVersion: 'NODE|16-lts'
  }
  java: {
    fxVersion: 'TOMCAT|8.5-java11'
  }
}

// Use the lookup
var linuxFx = fxConfigure[langEngine].fxVersion
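The looked-up value ultimately lands in the web app’s siteConfig. A minimal sketch of how that might be wired up (the parameter names planId and webAppName are illustrative, not from the recipe):

```bicep
// Sketch only: langEngine selects the runtime via the lookup shown above
@allowed([
  'dotnet'
  'python'
  'node'
  'java'
])
param langEngine string

param webAppName string
param planId string // resource ID of the App Service plan (assumed parameter)
param location string = resourceGroup().location

var fxConfigure = {
  dotnet: { fxVersion: 'DOTNETCORE|6' }
  python: { fxVersion: 'PYTHON|3.9' }
  node: { fxVersion: 'NODE|16-lts' }
  java: { fxVersion: 'TOMCAT|8.5-java11' }
}
var linuxFx = fxConfigure[langEngine].fxVersion

resource webApp 'Microsoft.Web/sites@2022-03-01' = {
  name: webAppName
  location: location
  identity: {
    type: 'SystemAssigned' // managed identity, used later for storage RBAC
  }
  properties: {
    serverFarmId: planId
    siteConfig: {
      linuxFxVersion: linuxFx // e.g. 'PYTHON|3.9' resolved from the lookup
    }
  }
}
```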
Storage Account: Role Assignment
When using Managed Identity, we need to grant the proper role(s) on the Storage Account to the Web App’s Managed Identity; in our example, it is the Storage Table Data Contributor role. I created a Bicep module for granting a few predefined roles. This module should be good enough for most use cases, but you may need to add additional roles to suit your needs by looking up the proper role GUID in the Built-in roles document (MS Doc link):
@allowed([
  'TableDataContributor'
  'TableDataReader'
  'BlobDataContributor'
  'BlobDataReader'
  'BlobDataOwner'
])
@description('Predefined role to be assigned')
param roleName string

// Find the role GUIDs at: https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles
var roleGuidLookup = {
  TableDataContributor: {
    roleDefinitionId: '0a9a7e1f-b9d0-4cc4-a60d-0319b160aaa3'
  }
  TableDataReader: {
    roleDefinitionId: '76199698-9eea-4c19-bc75-cec21354c6b6'
  }
  BlobDataContributor: {
    roleDefinitionId: 'ba92f5b4-2d11-453d-a403-e96b0029c9fe'
  }
  BlobDataReader: {
    roleDefinitionId: '2a2b9908-6ea1-4ae2-8e65-a410df84e7d1'
  }
  BlobDataOwner: {
    roleDefinitionId: 'b7e6dc6d-f1e8-4753-8033-0f276bb0955b'
  }
}
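The module then has to turn the looked-up GUID into an actual role assignment. A sketch of how that part might look, assuming it lives in the same module as the lookup above (the parameter names storageAccountName and principalId are illustrative):

```bicep
// Sketch only: assumed additional parameters of the role-assignment module
param storageAccountName string
param principalId string // object ID of the Web App's managed identity

resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
  name: storageAccountName
}

// Assign the selected built-in role at the storage account scope.
// The deterministic guid() keeps the assignment name stable across deployments.
resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(storageAccount.id, principalId, roleGuidLookup[roleName].roleDefinitionId)
  scope: storageAccount
  properties: {
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', roleGuidLookup[roleName].roleDefinitionId)
    principalId: principalId
    principalType: 'ServicePrincipal'
  }
}
```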
App Service: linuxFxVersion
The language runtime engine in a Linux App Service is configured through linuxFxVersion; however, there is no official documentation of the possible values, or at least I can’t find it. Instead of searching for them, I usually get the value by manually configuring the runtime in the Azure Portal and then using ‘Export template’. Here are a few values:
| Runtime | linuxFxVersion |
| --- | --- |
| .NET Core 3.1 | `DOTNETCORE\|3.1` |
| .NET 6 | `DOTNETCORE\|6` |
| Python 3.7 | `PYTHON\|3.7` |
| Python 3.9 | `PYTHON\|3.9` |
| Node.js 16 | `NODE\|16-lts` |
| Node.js 14 | `NODE\|14-lts` |
| Java 17 with Tomcat 10 | `TOMCAT\|10.0-java17` |
| Java 11 with Tomcat 8 | `TOMCAT\|8.5-java11` |
| Java 8 SE | `JAVA\|8-jre8` |
| Java 11 SE | `JAVA\|11-java11` |
| PHP 8.0 | `PHP\|8.0` |
| Ruby 2.7 | `RUBY\|2.7` |
Interesting Finding: Storage Account IP Rules
It is possible to use IP rules in a Storage Account to limit network access. In our example, we would deny all public access from the internet and allow access only from the App Service outbound IP addresses. Unfortunately, this cannot be done with IP rules alone, because the app service and storage account are in the same region. As stated in the Microsoft documentation, ‘Services deployed in the same region as the storage account use private Azure IP addresses for communication’*.
* See Configure Azure Storage firewalls and virtual networks (MS Doc link)
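For reference, such an IP-rule configuration would look roughly like the sketch below; the outbound IP is a placeholder, and in a real deployment the list would come from the web app’s possibleOutboundIpAddresses. Per the limitation above, it simply does not take effect for same-region App Service traffic:

```bicep
// Sketch only: illustrative names and placeholder IPs
param storageAcctName string
param location string = resourceGroup().location
param appServiceOutboundIps array = [
  '20.0.0.1' // placeholder outbound IP of the App Service
]

resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: storageAcctName
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
  properties: {
    networkAcls: {
      defaultAction: 'Deny' // block public access by default
      bypass: 'AzureServices'
      ipRules: [for ip in appServiceOutboundIps: {
        value: ip
        action: 'Allow'
      }]
    }
  }
}
```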
Closing out
I hope you find this introductory recipe interesting and even useful. I didn’t explain every detail because I assume you already have basic knowledge of the technologies used here. Anyway, please let me know if you have any suggestions for improving how I document the recipe or structure the sample code.