Sitecore Analytics Errors

ERROR [Experience Analytics]: System.Net.WebException: The remote name could not be resolved: 'reportingserviceurl'
   at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
   at System.Net.HttpWebRequest.GetRequestStream()
   at Sitecore.Xdb.Reporting.Datasources.Remote.RemoteReportDataSourceProxy.GetData(ReportDataQuery query)
   at Sitecore.Xdb.Reporting.ReportDataProvider.ExecuteQueryWithCache(ReportDataQuery query, ReportDataSource dataSource, CachingPolicy cachingPolicy)
   at Sitecore.Xdb.Reporting.ReportDataProvider.GetData(String dataSourceName, ReportDataQuery query, CachingPolicy cachingPolicy)
   at Sitecore.ExperienceAnalytics.Core.Repositories.SiteRemoteReader.GetEntities(String sqlQuery)
   at Sitecore.ExperienceAnalytics.Core.Repositories.SiteRemoteReader.GetAll(NameValueCollection readingPreferences)
   at Sitecore.ExperienceAnalytics.Core.Repositories.CachedReaderDecorator`2.GetAll(NameValueCollection readingPreferences)
   at Sitecore.ExperienceAnalytics.Core.Repositories.SiteFilter.FilterReaderDecorator`2.GetAll(NameValueCollection readingPreferences)
   at Sitecore.ExperienceAnalytics.Client.RenderingHelper.GetSiteComboBoxItems()

If you're getting this error message, your configuration is likely missing the URL to the reporting service.

On the CM server, modify the configuration file at:
\wwwroot\App_Config\Sitecore\Azure\Sitecore.Xdb.Remote.Client.CM.config

Notice that there are two spots for URLs. If those locations contain dummy placeholder URLs, something went awry during the original setup. Replace the placeholders with your actual reporting (rep) and processing (prc) service URLs.

Azure Search Missing Target Dropdown

Missing options in the target dropdown of the general link's internal link form? Perhaps surprisingly, the options are sourced from the search index.

First, check whether a simple rebuild of your core index does the trick.

If you've already tried that and still no dice, you may be running into the same issue I did. After going to Sitecore Support, I got a few good pieces of information:

  1. In order to use Azure Search in Sitecore, you need to limit the fields indexed by Sitecore. This is typically done with <indexAllFields>false</indexAllFields>.
  2. Some fields are required by SPEAK for these forms to work properly.

The Solution

There are a few templates and fields that need to be available for this functionality to work properly. Make sure your solution has these standard configuration nodes set up.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:search="http://www.sitecore.net/xmlconfig/search/">
  <sitecore role:require="ContentManagement or ContentDelivery" search:require="azure">
    <contentSearch>
      <indexConfigurations>
        <defaultCloudIndexConfiguration>
          <documentOptions>
            <include hint="list:AddIncludedTemplate">
              <StandardTemplate>{1930BBEB-7805-471A-A3BE-4858AC7CF696}</StandardTemplate>
              <CommonText>{76F63DF7-0235-4164-86AB-84B5EC48CB2A}</CommonText>
            </include>
            <include hint="list:AddIncludedField">
              <fieldId>{8CDC337E-A112-42FB-BBB4-4143751E123F}</fieldId>
              <hidden>{39C4902E-9960-4469-AEEF-E878E9C8218F}</hidden>
            </include>
          </documentOptions>
        </defaultCloudIndexConfiguration>
      </indexConfigurations>
    </contentSearch>
  </sitecore>
</configuration>

Azure Search replication

If you're trying to set up a geo-replicated disaster recovery site and you're using Azure Search, you likely ran into the same issue that I did. Azure Search simply does not have the geo-replication tools or abilities that SQL does. This is all the more frustrating because it's practically the only PaaS element in the Sitecore ecosystem that lacks this functionality. If you don't have the luxury of being able to re-index your data rapidly, you're stuck waiting for the data to index, which in the context of Sitecore can take several hours on particularly large sites.

Additionally, this can be problematic when dealing with blue/green deployments, as customer-facing content could (and should) be included in your search index. That problem can be solved in a similar fashion; added to a zero-downtime deployment method, it makes for a more complete and safe deployment.

Using an Azure Search index as a source

Any data processing you needed to do to populate your primary index can be skipped if you simply use your main Azure Search index as the source for the second. I have this brokered through an Azure Function.

[Diagram: Azure Function replication flow]

The code for the function is as follows:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;


namespace BendingSitecore.Function
{
    public static class AzureSearchReplicate
    {
        [FunctionName("AzureSearchReplicate")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);

            // The optional "indexes" property limits replication to specific indexes.
            IEnumerable<string> indexes = Enumerable.Empty<string>();
            if (data.indexes != null)
            {
                indexes = ((JArray)data.indexes).Select(x => (string)x);
            }
            try
            {
                Start(new SearchServiceClient(data.source.ToString(), new SearchCredentials(data.sourceKey.ToString())),
                    new SearchServiceClient(data.destination.ToString(), new SearchCredentials(data.destinationKey.ToString())),
                    false, log, indexes);
            }
            catch (Exception e)
            {
                log.LogError(e, "An error occurred");
                return new BadRequestObjectResult("Requires a JSON object with source, destination and keys.");
            }

            return new OkObjectResult("Azure Search replication is running; it should finish in about 10 minutes.");
        }
        public static void Start(SearchServiceClient source, SearchServiceClient destination, bool wait,
            ILogger log, IEnumerable<string> indexes)
        {
            List<Task> tasks = new List<Task>();
			ClearAllIndexes(destination, indexes);
			foreach (var index in source.Indexes.List().Indexes.Where(x => !indexes.Any() || indexes.Any(i => i.StartsWith(x.Name))))
			{
				tasks.Add(Task.Run(async () =>
				{
					try
					{
						destination.Indexes.Get(index.Name);
					}
					catch (Exception e)
					{
						log.LogInformation($"creating index {index.Name}", null);
						destination.Indexes.Create(index);
						await Task.Delay(5000);
					}
					await MigrateData(source.Indexes.GetClient(index.Name),
						destination.Indexes.GetClient(index.Name), log);
				}));
			}
			if (wait)
			{
				foreach (var task in tasks)
				{
					task.Wait();
				}
			}
		}

        public static void ClearAllIndexes(SearchServiceClient client, IEnumerable<string> indexes)
		{
			foreach (var index in client.Indexes.List().Indexes.Where(x => !indexes.Any() || indexes.Any(i => i.StartsWith(x.Name))))
			{
				client.Indexes.Delete(index.Name);
			}
		}

		public static async Task MigrateData(ISearchIndexClient source, ISearchIndexClient destination,
            ILogger log)
		{
			log.LogInformation($"Starting migration of data for {source.IndexName}", null);
			SearchContinuationToken token = null;
			var searchParameters = new SearchParameters { Top = int.MaxValue };
			int retryCount = 0;
			while (true)
			{
				DocumentSearchResult results;
				if (token == null)
				{
					results = await source.Documents.SearchAsync("*", searchParameters);
				}
				else
				{
					results = await source.Documents.ContinueSearchAsync(token);
				}
				try
				{
					await destination.Documents.IndexAsync(IndexBatch.New(GetAction(destination, results)));
				}
				catch (Exception e)
				{
					log.LogError(e, "Error occurred writing to destination", null);
					log.LogInformation("Retrying...", null);
					retryCount++;
					if (retryCount > 10){
						log.LogError("Giving up...", null);
						break;
					}
					continue;
				}
				if (results.ContinuationToken != null)
				{
					token = results.ContinuationToken;
					continue;
				}

				break;
			}
			log.LogInformation($"Finished migrating data for {source.IndexName}", null);
		}

		public static IEnumerable<IndexAction> GetAction(ISearchIndexClient client, DocumentSearchResult documents)
		{
			return documents.Results.Select(doc => IndexAction.MergeOrUpload(doc.Document));
		}
    }
}

Additionally, make sure your Azure Function has these configuration settings.


        AzureWebJobsDashboard                    = "DefaultEndpointsProtocol=https;AccountName=$storageName;AccountKey=$accountKey"
        AzureWebJobsStorage                      = "DefaultEndpointsProtocol=https;AccountName=$storageName;AccountKey=$accountKey"
        FUNCTIONS_EXTENSION_VERSION              = "~2"
        FUNCTIONS_WORKER_RUNTIME                 = "dotnet"
        WEBSITE_NODE_DEFAULT_VERSION             = "8.11.1"
        WEBSITE_RUN_FROM_PACKAGE                 = "1"
        WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = "DefaultEndpointsProtocol=https;AccountName=$storageName;AccountKey=$accountKey"
        WEBSITE_CONTENTSHARE                     = "$storageName"
        AzureWebJobsSecretStorageType            = "Files"

Running your function

Execute the function by sending a raw JSON request body like so:

{
    "destination":  "[standby azure search name]",
    "destinationKey":  "[standby azure search key]",
    "source":  "[primary azure search name]",    
    "sourceKey":  "[primary azure search key]",
    "indexes":  null
}

Note: if you want to manage only particular indexes, you may pass in a JSON array of index names; only the specified indexes will be cleared and refreshed. If null, all indexes are cleared and refreshed.
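For example, a request body that refreshes only a single index might look like this (the index name is illustrative; use one of your own):

```json
{
    "destination":  "[standby azure search name]",
    "destinationKey":  "[standby azure search key]",
    "source":  "[primary azure search name]",
    "sourceKey":  "[primary azure search key]",
    "indexes":  ["sitecore_web_index"]
}
```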

Automating

There are ways, through PowerShell, to create and execute the Azure Function given a valid Azure context and the desired names and resource groups. Expect a blog post on that shortly.

Dude, Where’re my logs? (Azure)

If you're new to the world of Sitecore in Azure PaaS, there's a good chance you popped open Kudu, browsed to the App_Data/Logs folder, and said to yourself, "oh yeah, it's in Application Insights or something...". Then, after going to Application Insights and pushing buttons haphazardly, you arrived at something that kind of looks like a log. It can be confusing and concerning to feel unable to debug a problem. I'm going to go over the various ways of retrieving debug information for your Sitecore App Services.

Application Insights

This is where the vast majority of your logs are going to be. It's not a great format and leaves me wanting more from the tool, but here's how to use it:

  1. Navigate to your site's Application Insights Azure resource
  2. In the Overview tab, select the Analytics button
  3. Execute a query under the traces table, for example:
  4. traces
    | where customDimensions.Role == "CM" and severityLevel == 3
  5. The results will not be ordered properly; click the timestamp column header to order by date
  6. Application Insights has handy auto-complete features to help you build a custom query for exactly the data you're looking for
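Since results come back unordered, it can also be handy to sort and limit in the query itself. A Kusto sketch (the Role custom dimension assumes Sitecore's standard telemetry setup):

```kusto
traces
| where customDimensions.Role == "CM" and severityLevel == 3
| order by timestamp desc
| take 100
```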

NOTE: While Application Insights provides a good way to track and query log data, there do seem to be particular cases where the application does not properly submit log data to it. This leads us to the next method.

Log Streaming

A more root-level logging solution is the log streaming option offered by the App Service. It can be a more reliable, if less pleasant, source of logs, which is good if you have an easily reproducible scenario. It provides data in a more traditional format that many Sitecore developers will be comfortable with, and it can give you more accurate and complete logging. Note, however, that the logs are placed on the filesystem, so they will affect your filesystem usage.

  1. Open the Diagnostics logs tab and turn on all the streaming log settings.
  2. In the log stream you will now see logs coming in in real time

Application logs

Some IIS-level events and errors will find their way into the underlying filesystem; you can use Kudu to access them.

  1. First, access Kudu
  2. Using either the CMD or PowerShell debug console, navigate to D:\home\LogFiles and open eventlog.xml
  3. Here you will find IIS events and errors that may uncover more catastrophic failures that were never recorded in Application Insights

Azure App Service Logging

Sometimes, despite all other options, the problem persists. This is when we must look at Azure health, as Azure events can occasionally impact our environments negatively without notification.

  1. On the App Service, select the Diagnose and solve problems tab
  2. Several reports in this interface are definitely worth an in-depth look. I'll focus on the Web App Restarted report: if your app pool seems to be recycling too often, this is probably where you need to look.
  3. This report will give you any reason Azure had for restarting your App Service

Remotely triggering Sitecore Operations in Azure

Sometimes you want your build/release system to execute some Sitecore process. I've found that this method works pretty well. To date I've used this same model to:

  1. Sitecore Publish
  2. Index rebuild
  3. Package install

Note: this is an adaptation of the strategy pioneered in this blog post

There are three separate concerns in this technique:

  1. Web service to perform the operation
  2. Kudu for dynamic service installation
  3. Powershell to execute the operation

Step 1 – create the service

This code will run a publish


using System.Collections.Generic;
using System.Linq;
using System.Web.Services;
using Sitecore.Jobs;


[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[System.ComponentModel.ToolboxItem(false)]
public class PublishManager : System.Web.Services.WebService
{
	[WebMethod(Description = "Publishes all content")]
	public bool PublishAll(string token)
	{
		if (string.IsNullOrEmpty(token))
			return false;
		if (token != "[TOKEN]")
			return false;
		var db = Sitecore.Configuration.Factory.GetDatabase("master");
		var item = db.GetRootItem();
		var publishingTargets = Sitecore.Publishing.PublishManager.GetPublishingTargets(item.Database);

		foreach (var publishingTarget in publishingTargets)
		{
			var targetDatabaseName = publishingTarget["Target database"];
			if (string.IsNullOrEmpty(targetDatabaseName))
				continue;

			var targetDatabase = Sitecore.Configuration.Factory.GetDatabase(targetDatabaseName);
			if (targetDatabase == null)
				continue;

			var publishOptions = new Sitecore.Publishing.PublishOptions(
				item.Database,
				targetDatabase,
				Sitecore.Publishing.PublishMode.Smart,
				item.Language,
				System.DateTime.Now);

			var publisher = new Sitecore.Publishing.Publisher(publishOptions);
			publisher.Options.RootItem = item;
			publisher.Options.Deep = true;
			publisher.PublishAsync();
		}
		return true;
	}
	[WebMethod(Description = "Checks publish status")]
	public string[] PublishStatus()
	{
		return JobManager.GetJobs().Where(x => !x.IsDone && x.Name.StartsWith("Publish")).Select(x =>
			x.Status.Processed + " -> " + x.Name).ToArray();
	}
}

This asmx service has two methods. The first initiates a publish to all publishing targets, using the root Sitecore item as the starting point. The second checks the status. It's important to do it this way because Azure enforces a fairly short request timeout, something like 3-5 minutes, which a publish can easily exceed. To avoid this, we trigger the publish asynchronously and then poll the status method until the publish completes.

Step 2 – Kudu PowerShell scripts

Note: this PowerShell code requires that you have an authenticated Azure session for the appropriate Azure subscription.

function Get-AzureRmWebAppPublishingCredentials($resourceGroupName, $webAppName, $slotName = $null){
	if ([string]::IsNullOrWhiteSpace($slotName) -or $slotName.ToLower() -eq "production"){
		$resourceType = "Microsoft.Web/sites/config"
		$resourceName = "$webAppName/publishingcredentials"
	}
	else{
		$resourceType = "Microsoft.Web/sites/slots/config"
		$resourceName = "$webAppName/$slotName/publishingcredentials"
	}
	$publishingCredentials = Invoke-AzureRmResourceAction -ResourceGroupName $resourceGroupName -ResourceType $resourceType -ResourceName $resourceName -Action list -ApiVersion 2015-08-01 -Force
    	return $publishingCredentials
}

function Get-KuduApiAuthorisationHeaderValue($resourceGroupName, $webAppName, $slotName = $null){
    $publishingCredentials = Get-AzureRmWebAppPublishingCredentials $resourceGroupName $webAppName $slotName
    $ret = @{}
    $ret.header = ("Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $publishingCredentials.Properties.PublishingUserName, $publishingCredentials.Properties.PublishingPassword))))
    $ret.url = $publishingCredentials.Properties.scmUri
    return $ret
}

function Get-FileFromWebApp($resourceGroupName, $webAppName, $slotName = "", $kuduPath){
    $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
    $kuduApiAuthorisationToken = $KuduAuth.header
    $kuduApiUrl = $KuduAuth.url + "/api/vfs/site/wwwroot/$kuduPath"

    Write-Host " Downloading File from WebApp. Source: '$kuduApiUrl'." -ForegroundColor DarkGray
    $tmpPath = "$($env:TEMP)\$([guid]::NewGuid()).xml"
    $null = Invoke-RestMethod -Uri $kuduApiUrl `
                        -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                        -Method GET `
                        -ContentType "multipart/form-data" `
                        -OutFile $tmpPath
    $ret = Get-Content $tmpPath | Out-String
    Remove-Item $tmpPath -Force
    return $ret
}

function Write-FileToWebApp($resourceGroupName, $webAppName, $slotName = "", $fileContent, $kuduPath){
    $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
    $kuduApiAuthorisationToken = $KuduAuth.header
    $kuduApiUrl = $KuduAuth.url + "/api/vfs/site/wwwroot/$kuduPath"

    Write-Host " Writing File to WebApp. Destination: '$kuduApiUrl'." -ForegroundColor DarkGray

    Invoke-RestMethod -Uri $kuduApiUrl `
                        -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                        -Method Put `
                        -ContentType "multipart/form-data"`
                        -Body $fileContent
}
function Write-FileFromPathToWebApp($resourceGroupName, $webAppName, $slotName = "", $filePath, $kuduPath){
    $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
    $kuduApiAuthorisationToken = $KuduAuth.header
    $kuduApiUrl = $KuduAuth.url + "/api/vfs/site/wwwroot/$kuduPath"

    Write-Host " Writing File to WebApp. Destination: '$kuduApiUrl'." -ForegroundColor DarkGray

    Invoke-RestMethod -Uri $kuduApiUrl `
                        -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                        -Method Put `
                        -ContentType "multipart/form-data"`
                        -InFile $filePath
}

function Write-ZipToWebApp($resourceGroupName, $webAppName, $slotName = "", $zipFile, $kuduPath){
    $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
    $kuduApiAuthorisationToken = $KuduAuth.header
    $kuduApiUrl = $KuduAuth.url + "/api/zip/site/wwwroot/$kuduPath"

    Write-Host " Writing Zip to WebApp. Destination: '$kuduApiUrl'." -ForegroundColor DarkGray

    Invoke-RestMethod -Uri $kuduApiUrl `
                        -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                        -Method Put `
                        -ContentType "multipart/form-data"`
                        -InFile $zipFile
}
function Remove-FileFromWebApp($resourceGroupName, $webAppName, $slotName = "", $kuduPath){
    $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
    $kuduApiAuthorisationToken = $KuduAuth.header
    $kuduApiUrl = $KuduAuth.url + "/api/vfs/site/wwwroot/$kuduPath"

    Write-Host " Deleting File from WebApp. Target: '$kuduApiUrl'." -ForegroundColor DarkGray

    Invoke-RestMethod -Uri $kuduApiUrl `
                        -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                        -Method Delete `
                        -ContentType "multipart/form-data"
}

This is a collection of Kudu utilities to manage files on an App Service, which is important for what we're going to do next.

Step 3 – Manage the publish

param(
	[Parameter(Mandatory=$true)]
    [string]$ResourceGroupName,
    [Parameter(Mandatory=$true)]
    [string]$AppServiceName
)
. "$PSScriptRoot\Get-KuduUtility.ps1"


$folderKey = -join ((97..122) | Get-Random -Count 10 | ForEach-Object {[char]$_})
$accessKey = -join ((97..122) | Get-Random -Count 10 | ForEach-Object {[char]$_})
try{
    (Get-Content "$PSScriptRoot\PublishManager.asmx").Replace("[TOKEN]", $accessKey) | Set-Content "$PSScriptRoot\tmp.asmx"
	Write-FileFromPathToWebApp -resourceGroupName $ResourceGroupName -webAppName $AppServiceName -slotName "" -filePath "$PSScriptRoot\tmp.asmx" -kuduPath "PublishManager/$folderKey/PublishManager.asmx"
	Remove-Item "$PSScriptRoot\tmp.asmx" -Force
	$site = Get-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $AppServiceName
	$webURI= "https://$($site.HostNames | Select-Object -Last 1)/PublishManager/$folderKey/PublishManager.asmx?WSDL"
    try{
        # The first request warms up the service (dynamic compilation of the asmx)
        # and may fail or time out, so we retry once.
        $null = Invoke-WebRequest -Uri $webURI -UseBasicParsing
    }catch{
        $null = Invoke-WebRequest -Uri $webURI -UseBasicParsing
    }
	$proxy = New-WebServiceProxy -uri $webURI
    $proxy.Timeout = 1800000
    $ready = $proxy.PublishAll($accessKey)

	if (-not $ready){
		throw "Unable to publish, check server logs for details."
	}
    Write-Host "Starting publish process and scanning for progress."
	for ($i = 0; $i -lt 180; $i++) {
		$done = $true
		$proxy.PublishStatus() | ForEach-Object {
			$done = $false
			write-host $_
		}
		write-host "***********  $($i * 20) Seconds **********"
		if ($done){
            Write-Host "Publish Completed."
			break
		}
		Start-Sleep -Seconds 20
		if ($i -eq 179){
			write-host "Sitecore Publish Timeout."
		}
	}
}finally{
	Write-Host "Removing Sitecore Publish service"
	Remove-FileFromWebApp -resourceGroupName $ResourceGroupName -webAppName $AppServiceName -slotName "" -kuduPath "PublishManager/$folderKey/PublishManager.asmx"
	Remove-FileFromWebApp -resourceGroupName $ResourceGroupName -webAppName $AppServiceName -slotName "" -kuduPath "PublishManager/$folderKey"
}

There are a few important components to this script:

  1. It generates two random keys: the first obfuscates the service path, the second is a security token that must be passed into the service. The asmx file is modified to embed this token prior to uploading to Azure.
  2. It uploads the modified asmx file to be dynamically compiled and utilized.
  3. It executes the publish-initiation service method.
  4. It calls the status method every 20 seconds and outputs the current status.
  5. It removes the service afterwards, for security purposes.

Sitecore Sidekick 1.5 – The Need for Speed

A huge thank you to Michael West for sparking many of these ideas.

The Release Candidate for Sidekick 1.5 is now available, bringing with it many improvements.

Importantly this includes a fix that lets Sitecore Sidekick function properly with the latest version of Rainbow (used by the latest version of Unicorn).

We can make it go FASTER


Introducing Data Blaster integration.

Data Blaster is a tool for Sitecore that manages bulk item creation in a different way: in a nutshell, it stages all the items into a temp table and merges them in all at once, dramatically reducing item creation time.

The results are incredible. Below is a test of 4,095 item creates.

[Chart: item creation timings]

However, Data Blaster is primarily for bulk inserts and becomes less effective for updates. Below is the same test of 4,095 items, but with updates rather than inserts. As you can see, the benefit of Data Blaster drops off and Content Migrator shines. Also noteworthy is that Sitecore packages seem to be much worse at updates. This is why Content Migrator only uses Data Blaster when items need to be created, not updated.

[Chart: item update timings]

With the introduction of Data Blaster there is a new advanced option, in case for some reason you don't want to use it.

[Screenshot: new advanced options]

Other speed improvements

Rev id for initial comparison

Using the rev id for comparisons. Previously, content was brought down from the remote server before the system decided whether to install or skip it. Now the content revision id is sent along with the request; if the revision ids match, the items are considered equivalent (the vast majority of the time, hence the ignore rev id option in the image above). This especially speeds up media syncs: media files that should be skipped are never brought over the network.
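The skip-by-revision idea can be sketched generically (illustrative Python, not Sidekick's actual implementation):

```python
def items_to_transfer(remote_revs, local_revs):
    """Given {item_id: revision_id} maps for both sides, return the ids
    whose revisions differ -- only those need to cross the network."""
    return [item_id for item_id, rev in remote_revs.items()
            if local_revs.get(item_id) != rev]

remote = {"a": "rev1", "b": "rev2", "c": "rev3"}
local = {"a": "rev1", "b": "rev9"}          # "b" changed, "c" missing locally
print(items_to_transfer(remote, local))     # -> ['b', 'c']
```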

Larger content packages

Content Migrator now batches item requests together. Parents and children are combined (except media items, which still go one at a time). The result is a dramatic decrease in requests made to the server and a corresponding performance increase.
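A rough sketch of that batching rule (illustrative only; batch size and item shape are assumptions, not Content Migrator's actual code):

```python
def build_batches(items, batch_size=50):
    """Group non-media items into batches; media items ship one at a time."""
    batches, current = [], []
    for item in items:
        if item.get("is_media"):
            if current:               # flush any pending non-media batch
                batches.append(current)
                current = []
            batches.append([item])    # media always goes alone
        else:
            current.append(item)
            if len(current) == batch_size:
                batches.append(current)
                current = []
    if current:
        batches.append(current)
    return batches
```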

Notable bug fixes

Diff Generation

There was a pesky bug in diff generation that resulted in some unusual occurrences, most likely leading to false positives for changes. This has been fixed, along with smarter diff generation that reduces processing time. It still retains its immediately-available diff functionality.

Easy Sitecore 9 Azure PAAS no downtime deployments

Disclaimer: It’s not really easy, just easier than alternatives.

This is also known as Blue/Green deployments

Keeping your site up during deployments is a very common ask for any website. Sitecore comes with its own set of challenges, but with a few simple tips and tricks in Azure you can get a very robust solution with minimal effort.

Attributes of this approach

  • There is downtime for authors in the CM environment.
  • There is no content authoring freeze. (However, while a deployment is in progress there is a publishing freeze, mitigated by an optional search index swap covered later.)
  • Azure assets are created on demand, so there is no offline environment sitting idle doing nothing but costing money.
  • Orchestrated by PowerShell.

The Issues

Primary Issue

A deploy is a two-step process. You need to publish new templates, renderings, and other developer-owned Sitecore items to the content delivery database, and you need to deploy the code that knows how to work with those new templates. No matter how hard you try, you can't do both at exactly the same time. This leaves the possibility of end users seeing server errors.

Secondary Issues

There are two secondary issues, the search index and xConnect, which are optional and will be discussed later. These are secondary because they lead to potentially annoying results, but not likely a server error.

Solving these problems

I'm going to focus on solving the primary problem for simplicity. Note that the diagram below has steps for search index replication; in this post I'll focus on blue/green without search index handling.

[Diagram: Sitecore 9 blue/green deployment model]

To accomplish many of these tasks we'll be leaning heavily on Kudu, which is essentially a REST API suite for Azure App Services.

Process outline

How will this process affect specific groups?

I'm an author

  1. There will be brief downtime in the content management environment
  2. Content management will come back up with the new code and templates
  3. Content editing is allowed
  4. Publishing will send changes to the staging slot content delivery URL (NOTE: if you're not duplicating a search index, this could impact your end users if your components are sourced from the search index)
  5. Once blue/green completes, the changes that you published will be end-user facing
  6. Business continues as usual

I'm a dev-ops professional

  1. Authors have been warned about a brief downtime in CM
  2. The blue/green process is kicked off
  3. CM and CD deployments are done
  4. Alert testers and wait for testing to complete
  5. On a successful test, swap staging slots to production; on failure, wait for a hotfix and deploy to the environments again; on catastrophic failure, initiate a rollback
  6. On success, run the finalize step to clean up unused offline environments

I'm a tester

  1. Get alerted by the dev-ops team that the deployment to the staging slot is complete
  2. Break it
  3. Alert the development team that an emergency hotfix is needed
  4. Wait for the dev-ops team to report the hotfix has been deployed
  5. Test again, no breaking this time
  6. Report to the dev-ops team that all is well

The Powershell

NOTE: These PowerShell functions all require the PowerShell context to have an authenticated Azure connection to perform their tasks. I recommend using a service principal for this.

Utility functions

This set of utility functions is mostly for file I/O with the Azure App Services. It is saved in a file called "Get-KuduUtility.ps1".

    function Get-AzureRmWebAppPublishingCredentials($resourceGroupName, $webAppName, $slotName = $null){
    	if ([string]::IsNullOrWhiteSpace($slotName) -or $slotName.ToLower() -eq "production"){
    		$resourceType = "Microsoft.Web/sites/config"
    		$resourceName = "$webAppName/publishingcredentials"
    	}
    	else{
    		$resourceType = "Microsoft.Web/sites/slots/config"
    		$resourceName = "$webAppName/$slotName/publishingcredentials"
    	}
    	$publishingCredentials = Invoke-AzureRmResourceAction -ResourceGroupName $resourceGroupName -ResourceType $resourceType -ResourceName $resourceName -Action list -ApiVersion 2015-08-01 -Force
        	return $publishingCredentials
    }
    
    function Get-KuduApiAuthorisationHeaderValue($resourceGroupName, $webAppName, $slotName = $null){
        $publishingCredentials = Get-AzureRmWebAppPublishingCredentials $resourceGroupName $webAppName $slotName
        $ret = @{}
        $ret.header = ("Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $publishingCredentials.Properties.PublishingUserName, $publishingCredentials.Properties.PublishingPassword))))
        $ret.url = $publishingCredentials.Properties.scmUri
        return $ret
    }
    
    function Get-FileFromWebApp($resourceGroupName, $webAppName, $slotName = string::Empty, $kuduPath){
        $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
        $kuduApiAuthorisationToken = $KuduAuth.header
        $kuduApiUrl = $KuduAuth.url + "/api/vfs/site/wwwroot/$kuduPath"
    
        Write-Host " Downloading File from WebApp. Source: '$kuduApiUrl'." -ForegroundColor DarkGray
        $tmpPath = "$($env:TEMP)\$([guid]::NewGuid()).xml"
        $null = Invoke-RestMethod -Uri $kuduApiUrl `
                            -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                            -Method GET `
                            -ContentType "multipart/form-data" `
                            -OutFile $tmpPath
        $ret = Get-Content $tmpPath | Out-String
        Remove-Item $tmpPath -Force
        return $ret
    }
    
    function Write-FileToWebApp($resourceGroupName, $webAppName, $slotName = [string]::Empty, $fileContent, $kuduPath){
        $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
        $kuduApiAuthorisationToken = $KuduAuth.header
        $kuduApiUrl = $KuduAuth.url + "/api/vfs/site/wwwroot/$kuduPath"
    
        Write-Host " Writing File to WebApp. Destination: '$kuduApiUrl'." -ForegroundColor DarkGray
    
        Invoke-RestMethod -Uri $kuduApiUrl `
                            -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                            -Method Put `
                            -ContentType "multipart/form-data" `
                            -Body $fileContent
    }
    
    function Write-ZipToWebApp($resourceGroupName, $webAppName, $slotName = [string]::Empty, $zipFile, $kuduPath){
        $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
        $kuduApiAuthorisationToken = $KuduAuth.header
        $kuduApiUrl = $KuduAuth.url + "/api/zip/site/wwwroot/$kuduPath"
    
        Write-Host " Writing Zip to WebApp. Destination: '$kuduApiUrl'." -ForegroundColor DarkGray
    
        Invoke-RestMethod -Uri $kuduApiUrl `
                            -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                            -Method Put `
                            -ContentType "multipart/form-data" `
                            -InFile $zipFile
    }
    
    function Copy-AppServiceToStaging($resourceGroupName, $webAppName){
        $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName
        $kuduApiAuthorisationToken = $KuduAuth.header
        $KuduStagingAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName "Staging"
        $kuduStagingApiAuthorisationToken = $KuduStagingAuth.header
    #NOTE: you must copy all paths of the webroot that aren't involved in your deployment
    #For example if you also wanted to copy the Sitecore folder you could change this to:
    # @("App_Config", "App_Data", "Sitecore")
        @("App_Config", "App_Data") | ForEach-Object {
            $kuduConfigApiUrl = $KuduAuth.url + "/api/zip/site/wwwroot/$_/"
            $tmpPath = "$($env:TEMP)\$([guid]::NewGuid()).zip"
            try{
                $WebClient = New-Object System.Net.WebClient
                $WebClient.Headers.Add("Authorization", $kuduApiAuthorisationToken)
                $WebClient.Headers.Add("ContentType", "multipart/form-data")
    
                $WebClient.DownloadFile($kuduConfigApiUrl, $tmpPath)
    
                $kuduConfigApiUrl = $KuduStagingAuth.url + "/api/zip/site/wwwroot/$_/"
                $kuduApiFolderUrl = $KuduStagingAuth.url + "/api/vfs/site/wwwroot/$_/"
                Invoke-RestMethod -Uri $kuduApiFolderUrl `
                    -Headers @{"Authorization"=$kuduStagingApiAuthorisationToken;"If-Match"="*"} `
                    -Method PUT `
                    -ContentType "multipart/form-data"
                #need a sleep due to a race condition if this folder is utilized too quickly after creating
                Start-Sleep -Seconds 2
                Invoke-RestMethod -Uri $kuduConfigApiUrl `
                    -Headers @{"Authorization"=$kuduStagingApiAuthorisationToken;"If-Match"="*"} `
                    -Method PUT `
                    -ContentType "multipart/form-data" `
                    -InFile $tmpPath
            }finally{
                if (Test-Path $tmpPath){
                    Remove-Item $tmpPath
                }
            }
        }
    }
    function Get-DatabaseNames{
    	param(
    		[Parameter(Mandatory = $true)]
    		[string]$ResourceGroupName,
    		[Parameter(Mandatory = $true)]
    		[string]$AppServiceName,
    		[Parameter(Mandatory = $true)]
    		[string]$DatabaseNameRoot,
    		[string]$SlotName = [string]::Empty
    
    	)
    	$contents = (Get-FileFromWebApp -resourceGroupName $ResourceGroupName -webAppName $AppServiceName -slotName $SlotName -kuduPath "App_Config/ConnectionStrings.config") | Out-String
    	if ($contents.Contains("$DatabaseNameRoot-2")){
    		$ret = @{
    			InactiveDatabase = $DatabaseNameRoot
    			ActiveDatabase = $DatabaseNameRoot + '-2'
    		}
    	}elseif ($contents.Contains("$DatabaseNameRoot")){
    		$ret = @{
    			InactiveDatabase = $DatabaseNameRoot + '-2'
    			ActiveDatabase = $DatabaseNameRoot
    		}
    	}else{
            throw "Unable to find $DatabaseNameRoot OR $($DatabaseNameRoot)-2"
        }
    	return $ret
    }
    
    

    Step 1

    Copy the Production slot to a Staging slot

    param(
        [Parameter(Mandatory=$true)]
        [string]$ResourceGroupName,
        [Parameter(Mandatory=$true)]
        [string]$AppServiceName
    )
    . "$PSScriptRoot\Get-KuduUtility.ps1"
    $existingSlot = Get-AzureRmWebAppSlot -ResourceGroupName $ResourceGroupName -Name $AppServiceName -Slot "Staging" -ErrorAction SilentlyContinue
    if ($null -ne $existingSlot){
        Write-Host "Removing existing Staging slot"
        Remove-AzureRmWebAppSlot -ResourceGroupName $ResourceGroupName -Name $AppServiceName -Slot "Staging" -Force
        Start-Sleep -s 10
    }
    $slot = Get-AzureRmWebAppSlot -ResourceGroupName $ResourceGroupName -Name $AppServiceName -Slot "Production"
    New-AzureRmWebAppSlot -ResourceGroupName $ResourceGroupName -Name $AppServiceName -Slot "Staging" -AppServicePlan $slot.ServerFarmId
    
    Copy-AppServiceToStaging -ResourceGroupName $ResourceGroupName -WebAppName $AppServiceName
    

    Step 2

    Make copies of all of your content delivery databases and wire your CM environment to the new databases

    param(
        [Parameter(Mandatory=$true)]
        [string]$ResourceGroupName,
        [Parameter(Mandatory=$true)]
        [string]$AppServiceName,
        [Parameter(Mandatory=$true)]
        [string]$CDAppServiceName,
        [string]$SlotName = [string]::Empty,
        [Parameter(Mandatory=$true)]
        [string]$DatabaseNameRoot,
        [Parameter(Mandatory=$true)]
        [string]$SqlServerName
    )
    
    . "$PSScriptRoot\Get-KuduUtility.ps1"
    $contents = (Get-FileFromWebApp -resourceGroupName $ResourceGroupName -webAppName $AppServiceName -slotName $SlotName -kuduPath "App_Config/ConnectionStrings.config") | Out-String
    $db = Get-DatabaseNames -ResourceGroupName $ResourceGroupName -AppServiceName $CDAppServiceName -DatabaseNameRoot $DatabaseNameRoot -SlotName $SlotName
    $contents = $contents.Replace("Catalog=$($db.ActiveDatabase);", "Catalog=$($db.InactiveDatabase);")
    
    
    $tst = Get-AzureRmSqlDatabase -DatabaseName $db.InactiveDatabase -ServerName $SqlServerName -ResourceGroupName $ResourceGroupName -ErrorAction SilentlyContinue
    if ($null -ne $tst){
        throw "Unable to copy database when the CM environment is referencing $($db.ActiveDatabase) and $($db.InactiveDatabase) already exist.  Make sure that both the tenant CD AND the CM environment are using the same database before this operation and delete the unused database and try again."
    }
    $tst = Get-AzureRmSqlDatabase -DatabaseName $db.ActiveDatabase -ServerName $SqlServerName -ResourceGroupName $ResourceGroupName -ErrorAction SilentlyContinue
    Write-Host "Copying database $($db.ActiveDatabase) to $($db.InactiveDatabase)"
    $parameters = @{
        ResourceGroupName = $ResourceGroupName
        DatabaseName = $db.ActiveDatabase
        ServerName = $SqlServerName
        CopyResourceGroupName = $ResourceGroupName
        CopyServerName = $SqlServerName
        CopyDatabaseName = $db.InactiveDatabase
    }
    if (-not [string]::IsNullOrWhitespace($tst.ElasticPoolName)){
        $parameters["ElasticPoolName"] = $tst.ElasticPoolName
    }
    New-AzureRmSqlDatabaseCopy @parameters
    Write-FileToWebApp -resourceGroupName $ResourceGroupName -webAppName $AppServiceName -fileContent $contents -slotName $SlotName -kuduPath "App_Config/ConnectionStrings.config"
    
    

    NOTE: This accesses the CD app service to determine the offline database. It toggles between {name} and {name}-2
    NOTE: DatabaseNameRoot refers to the database name without the -2 on it.
    NOTE: This assumes all of the assets share a resource group; if that’s not true, add more parameters
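    Assuming the Step 2 script is saved as “Copy-CDDatabases.ps1” (a hypothetical name, as are the resource names below), the invocation would look something like:

```powershell
# Hypothetical invocation; substitute your own resource names
.\Copy-CDDatabases.ps1 -ResourceGroupName "sc-prod-rg" `
    -AppServiceName "sc-prod-cm" `
    -CDAppServiceName "sc-prod-cd" `
    -DatabaseNameRoot "sc-prod-web" `
    -SqlServerName "sc-prod-sql"
```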

    Step 3

    Copy all content delivery app services to a Staging slot.

    param(
        [Parameter(Mandatory=$true)]
        [string]$ResourceGroupName,
        [Parameter(Mandatory=$true)]
        [string]$AppServiceName
    )
    . "$PSScriptRoot\Get-KuduUtility.ps1"
    $existingSlot = Get-AzureRmWebAppSlot -ResourceGroupName $ResourceGroupName -Name $AppServiceName -Slot "Staging" -ErrorAction SilentlyContinue
    if ($null -ne $existingSlot){
        Write-Host "Removing existing Staging slot"
        Remove-AzureRmWebAppSlot -ResourceGroupName $ResourceGroupName -Name $AppServiceName -Slot "Staging" -Force
        Start-Sleep -s 10
    }
    $slot = Get-AzureRmWebAppSlot -ResourceGroupName $ResourceGroupName -Name $AppServiceName -Slot "Production"
    New-AzureRmWebAppSlot -ResourceGroupName $ResourceGroupName -Name $AppServiceName -Slot "Staging" -AppServicePlan $slot.ServerFarmId
    
    Copy-AppServiceToStaging -ResourceGroupName $ResourceGroupName -WebAppName $AppServiceName
    

    NOTE: This assumes that your deployment process deploys all of Sitecore except things in App_Data and environment-specific App_Config config files like ConnectionStrings.config

    Step 4

    Execute a deploy as you normally would while targeting the production slot for CM and the staging slot for CD.
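    Your regular CI/CD tooling should drive this deploy, but if you wanted to script the package push by hand, the Write-ZipToWebApp helper from Get-KuduUtility.ps1 can target a slot. The resource names and package path below are hypothetical:

```powershell
. "$PSScriptRoot\Get-KuduUtility.ps1"

# Push the build package to the CM Production slot...
Write-ZipToWebApp -resourceGroupName "sc-prod-rg" -webAppName "sc-prod-cm" `
    -slotName "" -zipFile "C:\build\site.zip" -kuduPath ""

# ...and the same package to the CD Staging slot
Write-ZipToWebApp -resourceGroupName "sc-prod-rg" -webAppName "sc-prod-cd" `
    -slotName "Staging" -zipFile "C:\build\site.zip" -kuduPath ""
```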

    Step 5

    Test your changes on CM and the CD Staging slots
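    You can script part of this verification as a smoke test. This sketch (with a hypothetical hostname; by default a slot’s hostname gets a "-staging" suffix) simply asserts that the CD Staging slot responds with a 200:

```powershell
# Hypothetical staging hostname; substitute your own
$response = Invoke-WebRequest -Uri "https://sc-prod-cd-staging.azurewebsites.net/" -UseBasicParsing
if ($response.StatusCode -ne 200) {
    throw "Staging smoke test failed with status $($response.StatusCode)"
}
```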

    NOTE: These scripts assume that your deployment process handles all non-dynamically-generated assets. Things like the Sitecore folder would be included in your deployment process, whereas things like your license.xml or ConnectionStrings.config would not be; those are handled by the app service copy in Step 3
    NOTE: If you need to hotfix, you can repeat Step 4

    Step 6

    Swap the Production and Staging slots in your CD app services

    param(
        [Parameter(Mandatory=$true)]
        [string]$ResourceGroupName,
        [Parameter(Mandatory=$true)]
        [string]$AppServiceName,
        [string]$SlotName = "Staging"
    )
    
    Switch-AzureRmWebAppSlot -ResourceGroupName $ResourceGroupName -Name $AppServiceName -DestinationSlotName "Production" -SourceSlotName $SlotName
    

    NOTE: At this point your change is live and there has been no downtime.

    Step 7

    Clean up.
    You can delay this step to keep a rapid rollback available: until the old assets are removed, swapping the slots again rolls you back in seconds.
    Remove the old content delivery databases.

    
    param(
        [Parameter(Mandatory=$true)]
        [string]$ResourceGroupName,
        [Parameter(Mandatory=$true)]
        [string]$AppServiceName,
        [Parameter(Mandatory=$true)]
        [string]$SqlServerName,
        [Parameter(Mandatory=$true)]
        [string]$DatabaseNameRoot,
        [switch]$DeleteActive = $false,
        [string]$SlotName = ""
    )
    . "$PSScriptRoot\Get-KuduUtility.ps1"
    
    $db = Get-DatabaseNames -ResourceGroupName $ResourceGroupName -AppServiceName $AppServiceName -DatabaseNameRoot $DatabaseNameRoot -SlotName $SlotName
    
    if ($DeleteActive){
        Remove-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $SqlServerName -DatabaseName $db.ActiveDatabase -Force
    }else{
        Remove-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $SqlServerName -DatabaseName $db.InactiveDatabase -Force
    }
    

    Remove the Staging slots for each environment: the CD app services and the CM app service

    param(
        [string]$ResourceGroupName,
        [string]$AppServiceName,
        [string]$SlotName
    )
    
    Remove-AzureRmWebAppSlot -ResourceGroupName $ResourceGroupName -Name $AppServiceName -Slot $SlotName -Force
    

    That’s it: you’re done, and there was no downtime. Feels good, doesn’t it?

    What if we need to roll back?

    If you’ve gotten to Step 7 and find that the code is flawed and can’t be hotfixed, or you need an emergency content change, this is how you roll back.

    Step 1

    Swap the CM Staging slot back to Production (it still contains the connection strings pointing at the old database)

    param(
        [Parameter(Mandatory=$true)]
        [string]$ResourceGroupName,
        [Parameter(Mandatory=$true)]
        [string]$AppServiceName,
        [string]$SlotName = "Staging"
    )
    
    Switch-AzureRmWebAppSlot -ResourceGroupName $ResourceGroupName -Name $AppServiceName -DestinationSlotName "Production" -SourceSlotName $SlotName
    
    

    Step 2

    Then delete the NEW databases you created in Step 2 of the deployment

    param(
        [Parameter(Mandatory=$true)]
        [string]$ResourceGroupName,
        [Parameter(Mandatory=$true)]
        [string]$AppServiceName,
        [Parameter(Mandatory=$true)]
        [string]$SqlServerName,
        [Parameter(Mandatory=$true)]
        [string]$DatabaseNameRoot,
        [switch]$DeleteActive = $false,
        [string]$SlotName = [string]::Empty
    )
    . "$PSScriptRoot\Get-KuduUtility.ps1"
    
    $db = Get-DatabaseNames -ResourceGroupName $ResourceGroupName -AppServiceName $AppServiceName -DatabaseNameRoot $DatabaseNameRoot -SlotName $SlotName
    
    if ($DeleteActive){
        Remove-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $SqlServerName -DatabaseName $db.ActiveDatabase -Force
    }else{
        Remove-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $SqlServerName -DatabaseName $db.InactiveDatabase -Force
    }
    

    Step 3

    Then remove the Staging slots

    
    param(
        [string]$ResourceGroupName,
        [string]$AppServiceName,
        [string]$SlotName
    )
    
    Remove-AzureRmWebAppSlot -ResourceGroupName $ResourceGroupName -Name $AppServiceName -Slot $SlotName -Force
    

    Step 4

    Finally, run either a TDS sync or a Unicorn sync to get your developer-owned assets back to their pre-deployment state.
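    If you’re on Unicorn, its remote sync API ships with a PowerShell module that can drive this. The control panel URL and shared secret below are placeholders:

```powershell
# Unicorn ships Unicorn.psm1 alongside its shared-secret (CHAP) authentication
Import-Module .\Unicorn.psm1

# Hypothetical control panel URL and shared secret; use your own
Sync-Unicorn -ControlPanelUrl "https://sc-prod-cm.azurewebsites.net/unicorn.aspx" `
    -SharedSecret "your-shared-secret"
```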

    Solving secondary issues

    Search index

    Treat the search index the same way we treated the databases: at the start of the blue/green process, create a new index as a clone of the production-facing one, then rewire the CM environment and the CD Staging slots to use the new index.

    You would do this if you use the search index to serve content to end users, most commonly in a site search. The issue is that if you add pages to your offline web database, the search index would pick them up and could return links to end users that 404, or cause null reference exceptions when the code tries to fetch the corresponding Sitecore item.

    This is optional because with proper governance you can mitigate the risk. Things like content freezes or publishing freezes would work fine.

    With this safely implemented, you could potentially lift any content author freezes, as long as the authors know that during a blue/green deployment their changes are published to the offline environment until the swap is completed.

    XConnect

    You would want two parallel XConnect environments: one that’s always customer facing and one that never is. During a blue/green deployment, point your CM environment and the CD Staging slots at the always-offline XConnect environment. Then, immediately before the swap, rewire the CM and CD Staging slots to point at the online-only XConnect environment.
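    Assuming your XConnect endpoints live in ConnectionStrings.config (as in a standard Sitecore 9 Azure deployment), that rewire can reuse the Kudu helpers from earlier. The resource and host names here are hypothetical:

```powershell
. "$PSScriptRoot\Get-KuduUtility.ps1"

# Repoint the CD Staging slot from the offline XConnect to the online one
$contents = Get-FileFromWebApp -resourceGroupName "sc-prod-rg" -webAppName "sc-prod-cd" `
    -slotName "Staging" -kuduPath "App_Config/ConnectionStrings.config"
$contents = $contents.Replace("xconnect-offline.azurewebsites.net", "xconnect-online.azurewebsites.net")
Write-FileToWebApp -resourceGroupName "sc-prod-rg" -webAppName "sc-prod-cd" `
    -slotName "Staging" -fileContent $contents -kuduPath "App_Config/ConnectionStrings.config"
```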

    You would do this if you wanted to be certain that there was no testing data in XConnect.

    This is optional because most people wouldn’t mind a bit of testing data in their analytics.