SXA Advanced Dictionary

Download the Helix foundation project HERE

How do you handle basic content snippets?

People generally have strong opinions on where simple phrases or single words should be stored in order to properly localize them. There are normally two camps.

Store simple content in standard values

One camp stores simple phrases or single words in the standard values of the templates. This allows for more flexibility on a case-by-case basis but makes it hard to change them wholesale.

Using stock Dictionaries

The second camp uses the dictionary, but that comes with its own problems, particularly for keeping Helix pure and having a component own its own Sitecore items. Additionally, in SXA you need to worry about utilizing components across different sites and tenants that each own a completely different dictionary location.

Neither way is very good

Both of these options come with pretty serious problems that impose significant tech debt on the content authors in terms of flexibility.

Enter the AutoDictionary


Automatically creates your dictionary items if they don’t exist.

Allows authors to optionally edit the dictionary items from the Experience Editor (EE).

SXA site component sharing automatically handled.

Traditionally, a dictionary key is pathed using periods, like so:

Carousel.Labels.Next

This would look for the dictionary definition with that key. Traditionally it would be located at the path:

Dictionary/Carousel/Labels/Next

Using this information, we know where the dictionary definition SHOULD be.
With the addition of a default text value, we can create these dictionary items automatically, whereas a traditional dictionary would output nothing.

<span class="btn">@Html.AutoTranslate("Carousel.Labels.Next", "Next")</span>

The below would be EE authorable:

<span class="btn">@Html.AutoTranslate("Carousel.Labels.Next", "Next", true)</span>
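
For illustration, a helper along these lines could back the AutoTranslate calls above. This is a minimal sketch rather than the downloadable project’s code: the dictionary root path, the Dictionary Entry template id, and the Key/Phrase field names are assumptions to adjust for your solution, and a real SXA implementation would resolve the dictionary root from the current site context instead of a fixed path.

using System.Web;
using System.Web.Mvc;
using Sitecore;
using Sitecore.Data;
using Sitecore.SecurityModel;
using Sitecore.Web.UI.WebControls;

public static class AutoTranslateExtensions
{
    // Assumed values for illustration only; adjust for your solution.
    private const string DictionaryRoot = "/sitecore/content/Dictionary";
    private static readonly ID DictionaryEntryTemplateId = new ID("{6D1CD897-1936-4A3A-A511-289A94C2A7B1}");

    public static IHtmlString AutoTranslate(this HtmlHelper helper, string key, string defaultValue, bool editable = false)
    {
        // "Carousel.Labels.Next" maps to ".../Dictionary/Carousel/Labels/Next".
        var path = DictionaryRoot + "/" + key.Replace('.', '/');
        var db = Context.Database ?? Sitecore.Configuration.Factory.GetDatabase("master");
        var entry = db.GetItem(path);

        if (entry == null)
        {
            // Create the missing definition using the default text. In practice
            // creation belongs on the authoring (master) database, followed by a publish.
            using (new SecurityDisabler())
            {
                entry = db.CreateItemPath(path, db.Templates[TemplateIDs.Folder], db.Templates[DictionaryEntryTemplateId]);
                entry.Editing.BeginEdit();
                entry["Key"] = key;
                entry["Phrase"] = defaultValue;
                entry.Editing.EndEdit();
            }
        }

        // Editable mode runs the Phrase field through the FieldRenderer so
        // authors can change it from the Experience Editor.
        return editable
            ? new HtmlString(FieldRenderer.Render(entry, "Phrase"))
            : new HtmlString(entry["Phrase"]);
    }
}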

Sitecore Analytics Errors

ERROR [Experience Analytics]: System.Net.WebException: The remote name could not be resolved: 'reportingserviceurl'
   at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
   at System.Net.HttpWebRequest.GetRequestStream()
   at Sitecore.Xdb.Reporting.Datasources.Remote.RemoteReportDataSourceProxy.GetData(ReportDataQuery query)
   at Sitecore.Xdb.Reporting.ReportDataProvider.ExecuteQueryWithCache(ReportDataQuery query, ReportDataSource dataSource, CachingPolicy cachingPolicy)
   at Sitecore.Xdb.Reporting.ReportDataProvider.GetData(String dataSourceName, ReportDataQuery query, CachingPolicy cachingPolicy)
   at Sitecore.ExperienceAnalytics.Core.Repositories.SiteRemoteReader.GetEntities(String sqlQuery)
   at Sitecore.ExperienceAnalytics.Core.Repositories.SiteRemoteReader.GetAll(NameValueCollection readingPreferences)
   at Sitecore.ExperienceAnalytics.Core.Repositories.CachedReaderDecorator`2.GetAll(NameValueCollection readingPreferences)
   at Sitecore.ExperienceAnalytics.Core.Repositories.SiteFilter.FilterReaderDecorator`2.GetAll(NameValueCollection readingPreferences)
   at Sitecore.ExperienceAnalytics.Client.RenderingHelper.GetSiteComboBoxItems()

If you’re getting this error message, it’s likely that your configuration is missing the URL to the reporting service.

On the CM server modify the configuration file at:
\wwwroot\App_Config\Sitecore\Azure\Sitecore.Xdb.Remote.Client.CM.config

Notice that there are two spots for URLs. If those locations contain dummy placeholder URLs, then something went awry with the original setup. Replace the placeholder URLs with your reporting (rep) and processing (prc) service URLs.

Azure Search Missing Target Dropdown

Missing options in the target dropdown for the general link’s internal link form? The options are sourced from the search index, for some reason.

First, check whether a simple rebuild of your core index does the trick.

If you’ve already tried that and still no dice, you may be running into the same issue I did. After going to Sitecore Support, I got a few good pieces of information:

  1. In order to use Azure Search in Sitecore you need to limit the fields indexed by Sitecore. This is typically done with <indexAllFields>false</indexAllFields>
  2. There are some fields required by SPEAK to make these forms work properly

The Solution

There are a few templates and fields that need to be available for this functionality to work properly. Make sure your solution has these standard configuration nodes set up.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:search="http://www.sitecore.net/xmlconfig/search/">
  <sitecore role:require="ContentManagement or ContentDelivery" search:require="azure">
    <contentSearch>
      <indexConfigurations>
        <defaultCloudIndexConfiguration>
          <documentOptions>
            <include hint="list:AddIncludedTemplate">
              <StandardTemplate>{1930BBEB-7805-471A-A3BE-4858AC7CF696}</StandardTemplate>
              <CommonText>{76F63DF7-0235-4164-86AB-84B5EC48CB2A}</CommonText>
            </include>
            <include hint="list:AddIncludedField">
              <fieldId>{8CDC337E-A112-42FB-BBB4-4143751E123F}</fieldId>
              <hidden>{39C4902E-9960-4469-AEEF-E878E9C8218F}</hidden>
            </include>
          </documentOptions>
        </defaultCloudIndexConfiguration>
      </indexConfigurations>
    </contentSearch>
  </sitecore>
</configuration>

Azure Search replication

If you’re trying to get a geo-replicated disaster recovery site set up and you’re using Azure Search, you likely ran into the same issue that I did. Azure Search simply does not have the geo-replication tools or abilities that SQL does. This is all the more frustrating because it’s literally the only PaaS element in the Sitecore ecosystem that doesn’t have this functionality. If you don’t have the luxury of being able to re-index your data rapidly, you’re stuck waiting for the data to index, which in the context of Sitecore can take several hours on particularly large sites.

Additionally, this can be problematic when dealing with Blue/Green deployments, as customer-facing content could and should be included in your search index. That problem can be solved in a similar fashion: added to a zero-downtime deployment method, this replication gives a more complete and safer deployment.

Using an Azure Search index as a source

Any data processing you needed to do to populate your primary index can be skipped if you simply utilize one main Azure Search index as the source for the second. I have this brokered through an Azure Function.

The code for the function is as follows:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;


namespace BendingSitecore.Function
{
    public static class AzureSearchReplicate
    {
        [FunctionName("AzureSearchReplicate")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
	        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);

			// Optional list of index names to replicate; empty means replicate all.
			IEnumerable<string> indexes = Enumerable.Empty<string>();
			if (data.indexes != null){
				indexes = ((JArray)data.indexes).Select(x => (string)x);
			}
	        try
	        {
		        Start(new SearchServiceClient(data.source.ToString(), new SearchCredentials(data.sourceKey.ToString()))
			        , new SearchServiceClient(data.destination.ToString(), new SearchCredentials(data.destinationKey.ToString())), false, log, indexes);
			}
	        catch (Exception e)
	        {
				log.LogError(null, e, "An Error occurred");
		        return new BadRequestObjectResult("Require a json object with source, destination and keys.");
			}

	        return  new OkObjectResult($"Azure Search replication is running, should be finished in about 10 minutes.");
        }
		// Recreates the destination indexes and copies all documents across,
		// optionally limited to the supplied index names.
		public static void Start(SearchServiceClient source, SearchServiceClient destination, bool wait,
            ILogger log, IEnumerable<string> indexes)
		{
			List<Task> tasks = new List<Task>();
			ClearAllIndexes(destination, indexes);
			foreach (var index in source.Indexes.List().Indexes.Where(x => !indexes.Any() || indexes.Any(i => i.StartsWith(x.Name))))
			{
				tasks.Add(Task.Run(async () =>
				{
					try
					{
						destination.Indexes.Get(index.Name);
					}
					catch (Exception e)
					{
						log.LogInformation($"creating index {index.Name}", null);
						destination.Indexes.Create(index);
						await Task.Delay(5000);
					}
					await MigrateData(source.Indexes.GetClient(index.Name),
						destination.Indexes.GetClient(index.Name), log);
				}));
			}
			if (wait)
			{
				foreach (var task in tasks)
				{
					task.Wait();
				}
			}
		}

		public static void ClearAllIndexes(SearchServiceClient client, IEnumerable<string> indexes)
		{
			foreach (var index in client.Indexes.List().Indexes.Where(x => !indexes.Any() || indexes.Any(i => i.StartsWith(x.Name))))
			{
				client.Indexes.Delete(index.Name);
			}
		}

		public static async Task MigrateData(ISearchIndexClient source, ISearchIndexClient destination,
            ILogger log)
		{
			log.LogInformation($"Starting migration of data for {source.IndexName}", null);
			SearchContinuationToken token = null;
			var searchParameters = new SearchParameters { Top = int.MaxValue };
			int retryCount = 0;
			while (true)
			{
				DocumentSearchResult results;
				if (token == null)
				{
					results = await source.Documents.SearchAsync("*", searchParameters);
				}
				else
				{
					results = await source.Documents.ContinueSearchAsync(token);
				}
				try
				{
					await destination.Documents.IndexAsync(IndexBatch.New(GetAction(destination, results)));
				}
				catch (Exception e)
				{
					log.LogError(e, "Error occurred writing to destination", null);
					log.LogInformation("Retrying...", null);
					retryCount++;
					if (retryCount > 10){
						log.LogError("Giving up...", null);
						break;
					}
					continue;
				}
				if (results.ContinuationToken != null)
				{
					token = results.ContinuationToken;
					continue;
				}

				break;
			}
			log.LogInformation($"Finished migration data for {source.IndexName}", null);
		}

		public static IEnumerable<IndexAction> GetAction(ISearchIndexClient client, DocumentSearchResult documents)
		{
			return documents.Results.Select(doc => IndexAction.MergeOrUpload(doc.Document));
		}
    }
}

Additionally, make sure your Azure Function has these configuration settings.


        AzureWebJobDashboard                     = "DefaultEndpointsProtocol=https;AccountName=$storageName;AccountKey=$accountKey"
        AzureWebJobsStorage                      = "DefaultEndpointsProtocol=https;AccountName=$storageName;AccountKey=$accountKey"
        FUNCTIONS_EXTENSION_VERSION              = "~2"
        FUNCTIONS_WORKER_RUNTIME                 = "dotnet"
        WEBSITE_NODE_DEFAULT_VERSION             = "8.11.1"
        WEBSITE_RUN_FROM_PACKAGE                 = "1"
        WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = "DefaultEndpointsProtocol=https;AccountName=$storageName;AccountKey=$accountKey"
        WEBSITE_CONTENTSHARE                     = "$storageName"
        AzureWebJobsSecretStorageType            = "Files"

Running your function

Execute the function using a raw JSON request body like so:

{
    "destination":  "[standby azure search name]",
    "destinationKey":  "[standby azure search key]",
    "source":  "[primary azure search name]",    
    "sourceKey":  "[primary azure search key]",
    "indexes":  null
}

Note: if you want to manage particular indexes, you may pass in a JSON array of index names. Only the specified indexes will be cleared/refreshed; if null, all indexes will be cleared/refreshed.
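
For completeness, here’s a minimal C# sketch of triggering the function over HTTP. The function app host name and function key are placeholders for your own deployment; any tool that can POST JSON (PowerShell, curl, Postman) works just as well.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ReplicationTrigger
{
    static async Task Main()
    {
        // Hypothetical URL; substitute your function app name and function key.
        var url = "https://myfunctionapp.azurewebsites.net/api/AzureSearchReplicate?code=[function key]";
        var body = @"{
            ""destination"": ""[standby azure search name]"",
            ""destinationKey"": ""[standby azure search key]"",
            ""source"": ""[primary azure search name]"",
            ""sourceKey"": ""[primary azure search key]"",
            ""indexes"": null
        }";

        using (var client = new HttpClient())
        {
            // POST the request body and echo the function's response.
            var response = await client.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}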

Automating

Through PowerShell, there are ways to create and execute the Azure Function given a valid Azure context and some desired names and resource groups. Expect to see a blog post on that in the near future.

Dude, Where’re my logs? (Azure)

If you’re new to the world of Sitecore in Azure PaaS, then there’s a good chance that you popped open Kudu, browsed to the App_Data/Logs folder, and said to yourself “oh yeah, it’s in Application Insights or something…”. Then, after going to Application Insights and pushing buttons haphazardly, you arrived at something that kind of looks like a log. It can be confusing and concerning to feel unable to debug a problem. I’m going to go over the various ways of retrieving debug information for your Sitecore App Services.

Application Insights

This is where the vast majority of your logs are going to be. It’s not a great format and leaves me wanting more from the tool, but here’s how you use it:

  1. Navigate to your site’s Application Insights Azure resource
  2. In the Overview tab select the Analytics button
  3. Under the traces table execute a query, for example:
    traces
    | where customDimensions.Role == "CM" and severityLevel == 3
  4. The results will not be ordered properly; make sure you click the timestamp column header to order by date
  5. Application Insights has some handy auto-complete features to help you build a custom query to get exactly the data you’re looking for

NOTE: While Application Insights provides a good way to track and query log data, there do seem to be particular cases where the application does not properly submit log data to Application Insights. This leads us to the next method.

Log Streaming

A more root-level logging solution is the log streaming option offered by the App Service. It provides a more reliable but less pleasant source of logs, which is useful if you have an easily reproducible scenario. It presents the data in a more traditional format that many Sitecore developers will be more comfortable with, and it can give you more accurate and complete logging. It is important to note, however, that the logs are placed on the filesystem, so they will affect your filesystem usage.

  1. Open the Diagnostics logs tab and turn on all the streaming logs settings.
  2. In the log stream you will now see logs coming in in real time.

Application logs

Some IIS-level events and errors will find their way into the underlying filesystem; you can use Kudu to access them.

  1. First you need to access Kudu
  2. Using either the cmd or PowerShell Debug console, navigate to D:\home\LogFiles and open eventlog.xml
  3. Here you will find IIS events and errors that may uncover more catastrophic errors that fail to be recorded in Application Insights

Azure App Service Logging

Sometimes, despite all other options, the problem persists. This is when we must look at Azure health, as Azure events will occasionally impact our environments negatively without any notification.

  1. On the App Service select the Diagnose and solve problems tab
  2. There are several reports in this interface that are definitely worth an in-depth look. I’ll focus on the Web App Restarted report.
    If you find that your app pool seems to be recycling too often, this is probably where you need to look.
  3. This report will give you any reason that Azure had for restarting your App Service

Remotely triggering Sitecore Operations in Azure

Sometimes you want your build/release system to execute some Sitecore process. I’ve found that this method works pretty well. To date I’ve used this same model for:

  1. Sitecore Publish
  2. Index rebuild
  3. Package install

Note: this is an adaptation of the strategy pioneered in this blog post

There are three separate concerns in this technique:

  1. Web service to perform the operation
  2. Kudu for dynamic service installation
  3. Powershell to execute the operation

Step 1 – create the service

This code will run a publish:


using System.Collections.Generic;
using System.Linq;
using System.Web.Services;
using Sitecore.Jobs;


[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[System.ComponentModel.ToolboxItem(false)]
public class PublishManager : System.Web.Services.WebService
{
	[WebMethod(Description = "Publishes all content")]
	public bool PublishAll(string token)
	{
		if (string.IsNullOrEmpty(token))
			return false;
		if (token != "[TOKEN]")
			return false;
		var db = Sitecore.Configuration.Factory.GetDatabase("master");
		var item = db.GetRootItem();
		var publishingTargets = Sitecore.Publishing.PublishManager.GetPublishingTargets(item.Database);

		foreach (var publishingTarget in publishingTargets)
		{
			var targetDatabaseName = publishingTarget["Target database"];
			if (string.IsNullOrEmpty(targetDatabaseName))
				continue;

			var targetDatabase = Sitecore.Configuration.Factory.GetDatabase(targetDatabaseName);
			if (targetDatabase == null)
				continue;

			var publishOptions = new Sitecore.Publishing.PublishOptions(
				item.Database,
				targetDatabase,
				Sitecore.Publishing.PublishMode.Smart,
				item.Language,
				System.DateTime.Now);

			var publisher = new Sitecore.Publishing.Publisher(publishOptions);
			publisher.Options.RootItem = item;
			publisher.Options.Deep = true;
			publisher.PublishAsync();
		}
		return true;
	}
	[WebMethod(Description = "Checks publish status")]
	public string[] PublishStatus()
	{
		return JobManager.GetJobs().Where(x => !x.IsDone && x.Name.StartsWith("Publish")).Select(x =>
			x.Status.Processed + " -> " + x.Name).ToArray();
	}
}

This asmx service has two methods. The first initiates a publish to all publishing targets, using the root Sitecore item as the starting point. The second checks the status. It’s important to do it this way because Azure has a shortish forced timeout of something like 3-5 minutes, which a publish can easily surpass. To avoid this, we trigger the publish asynchronously and then use the second method to poll the status until the publish is completed.

Step 2 – Kudu PowerShell scripts

Note: This PowerShell code requires that you have an authenticated Azure session for the appropriate Azure subscription.

function Get-AzureRmWebAppPublishingCredentials($resourceGroupName, $webAppName, $slotName = $null){
	if ([string]::IsNullOrWhiteSpace($slotName) -or $slotName.ToLower() -eq "production"){
		$resourceType = "Microsoft.Web/sites/config"
		$resourceName = "$webAppName/publishingcredentials"
	}
	else{
		$resourceType = "Microsoft.Web/sites/slots/config"
		$resourceName = "$webAppName/$slotName/publishingcredentials"
	}
	$publishingCredentials = Invoke-AzureRmResourceAction -ResourceGroupName $resourceGroupName -ResourceType $resourceType -ResourceName $resourceName -Action list -ApiVersion 2015-08-01 -Force
	return $publishingCredentials
}

function Get-KuduApiAuthorisationHeaderValue($resourceGroupName, $webAppName, $slotName = $null){
    $publishingCredentials = Get-AzureRmWebAppPublishingCredentials $resourceGroupName $webAppName $slotName
    $ret = @{}
    $ret.header = ("Basic {0}" -f [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $publishingCredentials.Properties.PublishingUserName, $publishingCredentials.Properties.PublishingPassword))))
    $ret.url = $publishingCredentials.Properties.scmUri
    return $ret
}

function Get-FileFromWebApp($resourceGroupName, $webAppName, $slotName = "", $kuduPath){
    $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
    $kuduApiAuthorisationToken = $KuduAuth.header
    $kuduApiUrl = $KuduAuth.url + "/api/vfs/site/wwwroot/$kuduPath"

    Write-Host " Downloading File from WebApp. Source: '$kuduApiUrl'." -ForegroundColor DarkGray
    $tmpPath = "$($env:TEMP)\$([guid]::NewGuid()).xml"
    $null = Invoke-RestMethod -Uri $kuduApiUrl `
                        -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                        -Method GET `
                        -ContentType "multipart/form-data" `
                        -OutFile $tmpPath
    $ret = Get-Content $tmpPath | Out-String
    Remove-Item $tmpPath -Force
    return $ret
}

function Write-FileToWebApp($resourceGroupName, $webAppName, $slotName = "", $fileContent, $kuduPath){
    $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
    $kuduApiAuthorisationToken = $KuduAuth.header
    $kuduApiUrl = $KuduAuth.url + "/api/vfs/site/wwwroot/$kuduPath"

    Write-Host " Writing File to WebApp. Destination: '$kuduApiUrl'." -ForegroundColor DarkGray

    Invoke-RestMethod -Uri $kuduApiUrl `
                        -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                        -Method Put `
                        -ContentType "multipart/form-data" `
                        -Body $fileContent
}
function Write-FileFromPathToWebApp($resourceGroupName, $webAppName, $slotName = "", $filePath, $kuduPath){
    $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
    $kuduApiAuthorisationToken = $KuduAuth.header
    $kuduApiUrl = $KuduAuth.url + "/api/vfs/site/wwwroot/$kuduPath"

    Write-Host " Writing File to WebApp. Destination: '$kuduApiUrl'." -ForegroundColor DarkGray

    Invoke-RestMethod -Uri $kuduApiUrl `
                        -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                        -Method Put `
                        -ContentType "multipart/form-data" `
                        -InFile $filePath
}

function Write-ZipToWebApp($resourceGroupName, $webAppName, $slotName = "", $zipFile, $kuduPath){
    $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
    $kuduApiAuthorisationToken = $KuduAuth.header
    $kuduApiUrl = $KuduAuth.url + "/api/zip/site/wwwroot/$kuduPath"

    Write-Host " Writing Zip to WebApp. Destination: '$kuduApiUrl'." -ForegroundColor DarkGray

    Invoke-RestMethod -Uri $kuduApiUrl `
                        -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                        -Method Put `
                        -ContentType "multipart/form-data" `
                        -InFile $zipFile
}
function Remove-FileFromWebApp($resourceGroupName, $webAppName, $slotName = "", $kuduPath){
    $KuduAuth = Get-KuduApiAuthorisationHeaderValue $resourceGroupName $webAppName $slotName
    $kuduApiAuthorisationToken = $KuduAuth.header
    $kuduApiUrl = $KuduAuth.url + "/api/vfs/site/wwwroot/$kuduPath"

    Write-Host " Writing File to WebApp. Destination: '$kuduApiUrl'." -ForegroundColor DarkGray

    Invoke-RestMethod -Uri $kuduApiUrl `
                        -Headers @{"Authorization"=$kuduApiAuthorisationToken;"If-Match"="*"} `
                        -Method Delete `
                        -ContentType "multipart/form-data"
}

This is a collection of Kudu utilities to manage files on an App Service, which is important for what we’re going to do next.

Step 3 – Manage the publish

param(
	[Parameter(Mandatory=$true)]
    [string]$ResourceGroupName,
    [Parameter(Mandatory=$true)]
    [string]$AppServiceName
)
. "$PSScriptRoot\Get-KuduUtility.ps1"


# Generate two random lowercase strings: one obfuscates the service path,
# the other is the access token the service requires.
$folderKey = -join ((97..122) | Get-Random -Count 10 | ForEach-Object {[char]$_})
$accessKey = -join ((97..122) | Get-Random -Count 10 | ForEach-Object {[char]$_})
try{
    (Get-Content "$PSScriptRoot\PublishManager.asmx").Replace("[TOKEN]", $accessKey) | Set-Content "$PSScriptRoot\tmp.asmx"
	Write-FileFromPathToWebApp -resourceGroupName $ResourceGroupName -webAppName $AppServiceName -slotName "" -filePath "$PSScriptRoot\tmp.asmx" -kuduPath "PublishManager/$folderKey/PublishManager.asmx"
	Remove-Item "$PSScriptRoot\tmp.asmx" -Force
	$site = Get-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $AppServiceName
	$webURI= "https://$($site.HostNames | Select-Object -Last 1)/PublishManager/$folderKey/PublishManager.asmx?WSDL"
    try{
        # The first request triggers dynamic compilation of the service,
        # which may time out; retry once.
        $null = Invoke-WebRequest -Uri $webURI -UseBasicParsing
    }catch{
        $null = Invoke-WebRequest -Uri $webURI -UseBasicParsing
    }
	$proxy = New-WebServiceProxy -uri $webURI
    $proxy.Timeout = 1800000
    $ready = $proxy.PublishAll($accessKey)

	if (-not $ready){
		throw "Unable to publish, check server logs for details."
	}
    Write-Host "Starting publish process and scanning for progress."
	for ($i = 0; $i -lt 180; $i++) {
		$done = $true
		$proxy.PublishStatus() | ForEach-Object {
			$done = $false
			write-host $_
		}
		write-host "***********  $($i * 20) Seconds **********"
		if ($done){
            Write-Host "Publish Completed."
			break
		}
		Start-Sleep -Seconds 20
		if ($i -eq 179){
			write-host "Sitecore Publish Timeout."
		}
	}
}finally{
	Write-Host "Removing Sitecore Publish service"
	Remove-FileFromWebApp -resourceGroupName $ResourceGroupName -webAppName $AppServiceName -slotName "" -kuduPath "PublishManager/$folderKey/PublishManager.asmx"
	Remove-FileFromWebApp -resourceGroupName $ResourceGroupName -webAppName $AppServiceName -slotName "" -kuduPath "PublishManager/$folderKey"
}

There are a few important components to this script:

  1. The script generates a couple of random keys: the first obfuscates the service path, and the second is a security token that must be passed into the service. The script writes this token into the asmx file prior to uploading it to Azure.
  2. Uploads the modified asmx file to be dynamically compiled and utilized
  3. Executes the service method that initiates the publish
  4. Calls the status method every 20 seconds and outputs the current status
  5. Removes the service for security purposes

Sitecore Sidekick 1.5 – The Need for Speed

A huge thank you to Michael West for sparking many of these ideas.

The Release Candidate for Sidekick 1.5 is now available, bringing with it many improvements.

Importantly this includes a fix that lets Sitecore Sidekick function properly with the latest version of Rainbow (used by the latest version of Unicorn).

We can make it go FASTER


Introducing Data Blaster integration.

Data Blaster is a tool for Sitecore that manages bulk item creation in a different way. In a nutshell, it stages all the items into a temp table and merges them in all at once, dramatically reducing item creation time.

The results are incredible.  Below is a test of 4095 item creates.


However, Data Blaster is primarily for bulk inserts and starts to become less effective for updates. This is the same test of 4095 items, but with updates rather than inserts. As you can see, the benefit of Data Blaster drops off and Content Migrator shines. Also noteworthy is that SC packages seem to be much worse at updates. This is why Content Migrator only uses Data Blaster when items need to be created, not updated.


With the introduction of Data Blaster there is a new advanced option, in case for some reason you don’t want to use it.


Other speed improvements

Rev id for initial comparison

Using the rev id for comparisons: previously, content would be brought down from the remote server, at which point the system decided whether it needed to install it or skip it. Now the request includes the content revision id; if the revision ids match, the items are equivalent (the vast majority of the time, hence the new ignore rev id advanced option). This especially makes media syncs faster: media files that should be skipped aren’t even brought over the network.
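
Conceptually, the check behaves something like this illustrative sketch (an assumption for clarity, not Sidekick’s actual code), comparing the revision id sent with the request against the local item’s __Revision field:

using Sitecore;
using Sitecore.Data.Items;

public static class RevisionComparer
{
    // Illustrative only: returns true when content must be pulled from the
    // remote server, either because the item doesn't exist locally yet or
    // because the revision ids don't match.
    public static bool NeedsTransfer(Item localItem, string remoteRevisionId)
    {
        if (localItem == null)
            return true;

        var localRevision = localItem[FieldIDs.Revision]; // the __Revision field
        return !string.Equals(localRevision, remoteRevisionId, System.StringComparison.OrdinalIgnoreCase);
    }
}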

Larger content packages

Content Migrator now batches item requests together. Parents and children are combined (except media items, which still go one at a time). The result is a dramatic decrease in requests made to the server and a corresponding performance increase.

Notable bug fixes

Diff Generation

There was a pesky bug in the generation of the diff that would result in some unusual occurrences, most likely leading to false positives for changes. This has been fixed, along with smarter diff generation that results in less processing time. It still retains its immediately-available diff functionality.