Replacing String Tokens in PowerShell

Oftentimes in devops we find ourselves wanting to replace tokens related to environmental properties at the time of release. Token replacement is a popular way to do this, and many build platforms provide tools for it. However, if you find yourself unable to use the built-in tools, or find that they aren't sufficient, rolling your own can be fairly straightforward.

Replacing tokens with environment variables

This function scans the string for tokens and replaces each one with the environment variable of the same name. For example, if you have an environment variable named foo with a value of bar, all instances of __foo__ would be replaced with bar.

This function makes a single pass over the string in question, collecting token positions as it goes and applying the replacements afterwards. This keeps the work to one scan of the input while remaining flexible enough to handle any token format.

function Find-ReplaceToken {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory = $true)]
        [string]$String, # String that you want to replace tokens in
        [string]$TokenPrefix = "__",
        [string]$TokenSuffix = "__"
    )
    $ret = [System.Text.StringBuilder]::new($String)
    $found = New-Object 'System.Collections.Stack'
    $charArr = $String.ToCharArray()
    $start = -1
    $stop = -1
    $token = [System.Text.StringBuilder]::new()
    for ($i = 0; $i -lt $charArr.Length; $i++) {
        # While inside a candidate token, collect the current character.
        if ($start -ne -1){
            $null = $token.Append($charArr[$i])
        }
        if ($charArr[$i] -eq "`n"){
            # Tokens do not span lines; abandon any partial token.
            $start = -1
            $stop = -1
            $null = $token.Clear()
        }
        elseif($start -ne -1 -and $String.Substring($i-$TokenPrefix.Length, $TokenPrefix.Length) -eq $TokenPrefix -and $charArr[$i - $TokenPrefix.Length] -eq $TokenPrefix[$TokenPrefix.Length-1]){
            # A new prefix inside a candidate token restarts the scan from it.
            $start = -1
            $stop = -1
            $null = $token.Clear()
            $i--
        }
        elseif ($start -ne -1 -and $i -ge $TokenSuffix.Length - 1 -and $String.Substring($i-$TokenSuffix.Length+1, $TokenSuffix.Length) -eq $TokenSuffix){
            # Suffix found; record the completed token and resume scanning.
            $stop = $i+1
            $found.Push([System.Tuple]::Create($start, $stop, $token.ToString()))
            Write-Host "TOKEN FOUND - $($token.ToString())"
            $i--
            $start = -1
            $stop = -1
            $null = $token.Clear()
        }
        elseif ($i -ge $TokenPrefix.Length -and $String.Substring($i-$TokenPrefix.Length, $TokenPrefix.Length) -eq $TokenPrefix){
            # Prefix just ended; a candidate token begins at the current character.
            if ($start -eq -1){
                $start = $i-$TokenPrefix.Length
                $null = $token.Append($TokenPrefix)
                $null = $token.Append($charArr[$i])
            }
        }
    }
    $replacedTokens = $false
    # Replace right-to-left (LIFO) so earlier token offsets stay valid.
    while ($found.Count -gt 0){
        $t = $found.Pop()
        $var = $t.Item3.Substring($TokenPrefix.Length, $t.Item3.Length - $TokenPrefix.Length - $TokenSuffix.Length)
        if ($null -ne [System.Environment]::GetEnvironmentVariable($var)){
            Write-Host "REPLACING $($t.Item3) with $([System.Environment]::GetEnvironmentVariable($var))"
            $replacedTokens = $true
            $null = $ret.Remove($t.Item1, $t.Item2 - $t.Item1)
            $null = $ret.Insert($t.Item1, [System.Environment]::GetEnvironmentVariable($var), 1)
        }else{
            Write-Host "Environment Variable $var not found."
        }
    }
    if ($replacedTokens){
        return $ret.ToString()
    }
    return [string]::Empty
}
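As a quick sanity check, the function can be exercised directly in a console session (the variable name and values here are illustrative):

# Illustrative smoke test: define an environment variable, then replace its token.
$env:foo = "bar"
Find-ReplaceToken -String "value=__foo__"
# Expected output: value=bar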

Replacing file content

Passing in the root path of the files containing tokens, along with an extension filter, starts the operation and recurses over all matching files searching for tokens.

param(
    [string]$path,
    [string[]]$extensions
)

# Reference to the string replace function above

Get-ChildItem $path -Include $extensions -Recurse -File | ForEach-Object {
    $contents = Find-ReplaceToken -String (Get-Content $_.FullName -Raw)
    if (-not [string]::IsNullOrWhiteSpace($contents)){
        Write-Host "Replacing tokens in file: $($_.FullName)"
        Set-Content -Path $_.FullName -Value $contents
    }
}
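Assuming the wrapper above is saved as Replace-FileTokens.ps1 (a name chosen here for illustration), a release step could invoke it like so:

.\Replace-FileTokens.ps1 -path "C:\release\drop" -extensions "*.config","*.json"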

Don’t want to use Environment variables?

By making a few simple changes you can have the function pull replacement values from a standard hashtable instead:

function Find-ReplaceToken {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory = $true)]
        [string]$String, # String that you want to replace tokens in
        [string]$TokenPrefix = "__",
        [string]$TokenSuffix = "__",
        [hashtable]$Tokens # Add this parameter to pass in a hashtable instead of environment variables
    )
# ...
    while ($found.Count -gt 0){
        $t = $found.Pop()
        $var = $t.Item3.Substring($TokenPrefix.Length, $t.Item3.Length - $TokenPrefix.Length - $TokenSuffix.Length)
        if ($null -ne $Tokens[$var]){ # Look up the hashtable instead of the environment here
            Write-Host "REPLACING $($t.Item3) with $($Tokens[$var])"
            $replacedTokens = $true
            $null = $ret.Remove($t.Item1, $t.Item2 - $t.Item1)
            $null = $ret.Insert($t.Item1, $Tokens[$var], 1) # And insert the hashtable value here
        }else{
            Write-Host "Token $var not found in the hashtable."
        }
    }
# ...
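With those changes in place, a call could look like this sketch (the keys and values are illustrative):

Find-ReplaceToken -String "server=__dbHost__;db=__dbName__" -Tokens @{ dbHost = "sql01"; dbName = "web" }
# Expected output: server=sql01;db=web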

Advanced: Extending SXA Scriban

GitHub – scriban/scriban (https://github.com/scriban/scriban): a fast, powerful, safe and lightweight scripting language and engine for .NET

The Basic Model

Adding simple Scriban functions to Sitecore's SXA is fairly straightforward. The following processor registers a function, and the patch below wires it into the generateScribanContext pipeline:

	public class GetItemMethod : IGenerateScribanContextProcessor
	{
		private IContext _context;
		public GetItemMethod(IContext context)
		{
			_context = context ?? throw new ArgumentNullException(nameof(context));
		}
		public void Process(GenerateScribanContextPipelineArgs args)
		{
			args.GlobalScriptObject.Import("sc_getitem", new GetItem((object id) => { return _context.Database.GetItem(id as string); }));
		}
		private delegate Item GetItem(object id);
	}
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <generateScribanContext>
        <processor type="{YOUR NAMESPACE].GetItemMethod,[YOUR ASSEMBLY]" resolve="true" />
      </generateScribanContext>
    </pipelines>
  </sitecore>
</configuration>

This allows you to use the Scriban function {{ sc_getitem '[MY GUID]' }} to return the item in the current context database that has that ID. This model works great when you're dealing with basic primitives and Sitecore items, but loses its power when you want something more dynamic.

Advanced Dictionary method

Making use of SXA's dictionary feature is great, and I tend to do it a lot. In an MVC world I tended to go with my https://jeffdarchuk.com/2019/10/23/sxa-advanced-dictionary/ approach. Bringing in the code from that blog post, below are the additions to make it work with Scriban.

	public class AutoDictionaryMethods : AddFieldRendererFunction, IGenerateScribanContextProcessor
	{
		private IAutoDictionaryRepository _autoDictionary;
		public AutoDictionaryMethods(IPageMode iPageMode, IAutoDictionaryRepository autoDictionary):base(iPageMode)
		{
			_autoDictionary = autoDictionary ?? throw new ArgumentNullException(nameof(autoDictionary));
		}
		public new void Process(GenerateScribanContextPipelineArgs args)
		{
			this.RenderingWebEditingParams = args.RenderingWebEditingParams;
			RenderField renderField = new RenderField(this.RenderDictionaryEntry);
			args.GlobalScriptObject.Import("sc_autodictionary", renderField);
		}
		public string RenderDictionaryEntry(object key, object defaultValue, ScriptArray parameters = null)
		{
			return this.RenderFieldImpl(_autoDictionary.GetDictionaryItem(key as string, defaultValue as string), AutoDictionary.Templates.SxaDictionaryItem.Fields.Phrase, parameters);
		}
		private delegate string RenderField(object key, object defaultValue, ScriptArray parameters = null);
	}

Note: a configuration patch identical to the sc_getitem one above is needed for this processor as well.
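Once the patch is in place, a rendering variant's Scriban template can call the function. A hypothetical usage, with an illustrative key and default text, might look like:

{{ sc_autodictionary 'Carousel.Labels.Next' 'Next' }}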

Scriban Partials


However, there are some limitations; for example, you can't use this model to execute additional Scriban. In .NET MVC terms I was looking for something like a partial rendering, and I started wondering if it was possible. Sure enough it is, with a little creative digging. First we need to inject into the runtime scripts processor located in the renderVariantField pipeline.

	public class RuntimeScriptObjects : RenderScriban
	{
		private IVariantFieldParser _variantFieldParser;
		public RuntimeScriptObjects(IVariantFieldParser variantFieldParser, IScribanTemplateRenderer templateRenderer):base(templateRenderer)
		{
			_variantFieldParser = variantFieldParser ?? throw new ArgumentNullException(nameof(variantFieldParser));
		}
		public RuntimeScriptObjects(IScribanTemplateRenderer renderer) : base(renderer) { }
		public override void RenderField(RenderVariantFieldArgs args)
		{
			this.VariantTemplate = args.VariantField as Sitecore.XA.Foundation.Scriban.Fields.VariantScriban;
			if (this.VariantTemplate == null)
				return;
			if (this.VariantTemplate.ScribanTemplate == null)
				this.VariantTemplate.ScribanTemplate = this.ScribanTemplateRenderer.Parse(this.VariantTemplate.Template, this.VariantTemplate.Path);
			TemplateContext templateContext = this.ScribanTemplateRenderer.GetTemplateContext(args.IsControlEditable, args.RenderingWebEditingParams);
			templateContext.PushCulture(this.Context.Language.CultureInfo);
			string empty = string.Empty;
			string text;
			if (this.VariantTemplate.ScribanTemplate.HasErrors)
			{
				text = string.Join("<br/>", this.VariantTemplate.ScribanTemplate.Messages.Select<LogMessage, string>((Func<LogMessage, string>)(m => HttpUtility.HtmlEncode(m.ToString())))) + "<br/>";
			}
			else
			{
				this.AddRuntimeScriptObjects(templateContext, this.VariantTemplate, args);
				try
				{
					text = this.ScribanTemplateRenderer.Render(this.VariantTemplate.ScribanTemplate, templateContext);
				}
				catch (Exception ex)
				{
					Log.Error(ex.Message, ex, (object)this);
					text = HttpUtility.HtmlEncode(ex.Message);
				}
			}
			Control control = (Control)new LiteralControl(text);
			if (!string.IsNullOrWhiteSpace(this.VariantTemplate.Tag))
			{
				HtmlGenericControl tag = new HtmlGenericControl(this.VariantTemplate.Tag);
				this.AddClass(tag, this.VariantTemplate.CssClass);
				this.AddWrapperDataAttributes((RenderingVariantFieldBase)this.VariantTemplate, args, tag);
				this.MoveControl(control, (Control)tag);
				control = (Control)tag;
			}
			args.ResultControl = control;
			args.Result = this.RenderControl(args.ResultControl);
		}
		public void AddDelegateToFunction(RenderVariantFieldArgs args, ScriptObject scriptObject)
		{

			scriptObject.Import("sc_delegateto", new DelegateToDeligate((Item item, Item variant) => {
				VariantScriban variantScriban = new VariantScriban(variant);
				variantScriban.ItemName = variant.Name;
				variantScriban.Path = GetTemplatePath(variant);
				variantScriban.Tag = variant.Fields[Templates.VariantScriban.Fields.Tag].GetEnumValue();
				variantScriban.Template = variant[Templates.VariantScriban.Fields.Template];
				variantScriban.CssClass = variant[Templates.VariantScriban.Fields.CssClass];
				variantScriban.ChildItems = variant.Children.Count > 0 ? _variantFieldParser.ParseVariantFields(variant, variant.Parent, false) : new List<BaseVariantField>();

				RenderVariantFieldArgs variantFieldArgs = new RenderVariantFieldArgs()
				{
					VariantField = variantScriban,
					Item = item,
					HtmlHelper = args.HtmlHelper,
					IsControlEditable = args.IsControlEditable,
					IsFromComposite = args.IsFromComposite,
					RenderingWebEditingParams = args.RenderingWebEditingParams,
					RendererMode = args.RendererMode,
					Model = args.Model
				};
				this.PipelineManager.Run("renderVariantField", variantFieldArgs);
				if (!string.IsNullOrEmpty(variantFieldArgs.Result))
					return variantFieldArgs.Result;
				if (variantFieldArgs.ResultControl != null)
					return this.RenderControl(variantFieldArgs.ResultControl);
				
				return string.Empty;
			}));
		}
		protected new virtual void AddRuntimeScriptObjects(TemplateContext templateContext, VariantScriban variantTemplate, RenderVariantFieldArgs args)
		{
			ScriptObject scriptObject = new ScriptObject();
			scriptObject.Add("i_item", args.Item);
			scriptObject.Add("o_model", args.Model);
			scriptObject.Import("sc_placeholder", (Delegate) new RenderPlaceholder(this.RenderPlaceholderImpl));
			if (args.Parameters != null && args.Parameters.ContainsKey("geospatial"))
				scriptObject.Add("o_geospatial", args.Parameters["geospatial"]);
			this.AddChildExecutionFunction(variantTemplate, args, scriptObject);
			this.AddChildEvaluationFunction(variantTemplate, args, scriptObject);
			this.AddDelegateToFunction(args, scriptObject);
			templateContext.PushGlobal(scriptObject);
		}

		private string GetTemplatePath(Item variantItem)
		{
			if (variantItem.Parent == null || variantItem.InheritsFrom(Sitecore.XA.Foundation.Variants.Abstractions.Templates.VariantsGrouping.ID))
				return string.Empty;
			return this.GetTemplatePath(variantItem.Parent) + "/" + variantItem.Name;
		}
		private delegate string RenderPlaceholder(string placeholderKey, Item item = null);
		private delegate string DelegateToDelegate(Item contextItem, Item variantItem);
	}
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <renderVariantField>
        <processor patch:instead="*[@type='Sitecore.XA.Foundation.Scriban.Pipelines.RenderVariantField.RenderScriban, Sitecore.XA.Foundation.Scriban']" type="[YOUR NAMESPACE].RuntimeScriptObjects, [YOUR ASSEMBLY]" resolve="true" />
      </renderVariantField>
    </pipelines>
  </sitecore>
</configuration>

The above code is mostly a copy of the RenderScriban type in Sitecore.XA.Foundation.Scriban, with one notable exception: the DelegateTo functionality has been added. This allows Scriban objects to delegate rendering to other Scriban objects; for example, a common recurring snippet of Scriban no longer needs to be copy/pasted. This is less about the specific use case and more about the technique of enabling Scriban to render other Scriban. Hopefully you can think of some splendidly clever ways to make this work for you.
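As a sketch of what that enables, a Scriban template could resolve a shared variant definition item and hand rendering off to it. The GUID below is a placeholder for your shared Scriban variant item:

{{ snippet = sc_getitem '{GUID-OF-SHARED-SCRIBAN-VARIANT}' }}
{{ sc_delegateto i_item snippet }}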

Sitecore Docker Debugging


Having spent the last half year or so diving into Docker, there are a few things I've found helpful to understand. Docker can be a lot to wrap your head around if you're new to containerization, so here are a few of the tips and tricks I came across to help you with Sitecore's starter kit for Docker (note: these were findings for Sitecore 10.0.0 and 10.0.1).

Unexplained extreme slowness

Some members of my team encountered an issue where Sitecore seemed to be crawling on every request, making every aspect of Sitecore extremely slow. Upon further review it was discovered that almost every Sitecore request took almost exactly 11 extra seconds. The most frustrating part was that there were no clues in the logs. It wasn't until I started systematically removing HttpRequest pipeline processors that I discovered the offender. This processor makes an outbound HTTP request for one reason or another, and disabling it fixed the issue:

<?xml version="1.0"?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <pipelines>
      <httpRequestBegin>
        <processor type="Sitecore.Pipelines.HttpRequest.EnsureServerUrl, Sitecore.Kernel" >
          <patch:delete />
        </processor>
      </httpRequestBegin>
    </pipelines>
  </sitecore>
</configuration>

IMPORTANT NOTE! The actual root cause turned out to be the DNS problem described next:

Unable to attach debugger

A few of my developers encountered an issue where they couldn't attach a debugger to a container. After digging in and analyzing logs from Visual Studio, I was led to execute this simple test in PowerShell inside the container:

docker exec -it <docker container name> powershell
iwr https://www.google.com -UseBasicParsing

The above resulted in an unresolved-host error.

ping 8.8.8.8

The above resulted in a successful ping.

Taken together, these two results show that the DNS server wasn't operating properly for the containers. This specifically impacts attaching a debugger because the first thing Visual Studio does when attaching is execute docker commands to dynamically download the remote debugging tools into the container, referencing the download host by name. No DNS, no remote debugging tools, and the process fails. Adding explicit DNS servers to the service definition in your compose file fixes it:

  cm:
    dns:
      - 8.8.8.8
      - 4.4.4.4
    isolation: ${ISOLATION}
    image: ${SITECORE_DOCKER_REGISTRY}sitecore-xp0-cm:${SITECORE_VERSION}
    depends_on:
      id:
        condition: service_started
      xconnect:
        condition: service_started
    environment:

Specifically applying the Google DNS server IP resolved this problem. As a bonus, it also fixed the prior issue.

Memory capped on CM

This one can be sneaky: by default the containers can use up to 1 GB of memory, and Sitecore will generally run fine in a container with 1 GB. However, you can do better by designating a higher memory limit if your system has it available. The CM, CD, and SQL containers particularly benefit from this adjustment.

  cm:
    ...
    mem_limit: 3GB
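To verify the new ceiling took effect, docker stats reports each container's memory limit (the container name will vary by compose project):

docker stats --no-stream
# The MEM USAGE / LIMIT column for the cm container should now end in 3GiB.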

Process isolation

By far the most significant performance boost I could find was to run the Sitecore containers with process isolation. However, this can be tricky if you're not running a Windows Server edition. Those of us with Windows 10 Pro need to jump through a few hoops.

1. Find what Windows version you're running by executing this command:
winver

2. Set up your docker-compose file to use the same images as your Windows version. In the starter kit this is done in your .env file:

COMPOSE_PROJECT_NAME=sitecore-xp0
SITECORE_DOCKER_REGISTRY=scr.sitecore.com/sxp/
SITECORE_VERSION=10.0.1-ltsc2019
SITECORE_ADMIN_PASSWORD=b
SQL_SA_PASSWORD=MyC0m9l&xP@ssw0rd

Adjust the SITECORE_VERSION tag to match your Windows version. For example, if winver confirms Windows 10 version 2004:
COMPOSE_PROJECT_NAME=sitecore-xp0
SITECORE_DOCKER_REGISTRY=scr.sitecore.com/sxp/
SITECORE_VERSION=10.0.1-2004

SITECORE_ADMIN_PASSWORD=b
SQL_SA_PASSWORD=MyC0m9l&xP@ssw0rd

3. Adjust the process isolation at the bottom of your .env file, changing this:

TRAEFIK_IMAGE=traefik:v2.3.5-windowsservercore-1809
TRAEFIK_ISOLATION=hyperv
ISOLATION=default

to this (TRAEFIK_ISOLATION stays hyperv, since the Traefik image is 1809-based):

TRAEFIK_IMAGE=traefik:v2.3.5-windowsservercore-1809
TRAEFIK_ISOLATION=hyperv
ISOLATION=process

Now when you start your docker-compose environment it'll download new images, and you'll be rewarded with environments that use about 40% less memory and gain roughly 40% in performance. A good situation if you can get it. Note that Sitecore doesn't publish images for every modern Windows version; if you're running 1909, for example, you won't be able to use process isolation.
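If you want to confirm a running container really picked up process isolation, you can inspect it (the container name here follows the compose project naming and is illustrative):

docker inspect --format '{{.HostConfig.Isolation}}' sitecore-xp0_cm_1
# Expected output: process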

Sidekick Migration Presets

What is it

If you find yourself regularly multi-selecting the same several nodes while performing a Sitecore Sidekick Content Migrator sync, then this update is for you. With defined presets you can configure and fully control a convenient single button that will set up and execute a sync for you.

How to get it

This is added in Sidekick 1.5.6

How to configure it

The configuration happens in your .local configuration as a presets element that is a sibling of the servers node. Here is a gist of the applicable XML snippet.

  • Here you can see the entire set of default control options. If your operation uses these options you may omit them and the defaults will apply. For example, omitting overwrite="true" from your XML node attributes will still default it to true.
  • A list of serverBlacklist nodes can be used to deny this preset operation on a particular environment. This might be useful if, for instance, you didn't want a full content load to come from Dev.
  • A list of serverWhitelist nodes can be defined to only allow this operation on a particular server and no others.
  • Note: blacklist and whitelist servers should match the server URLs defined in the servers XML node.
  • A list of GUID-based sources is defined to set the root pull locations.

Unicorn Configuration Validation

Unicorn is a great platform for tracking and propagating changes between environments. There is an incredible amount of flexibility in the configurations, which is a great thing; however, it's really easy to mess up in inconspicuous ways.

The most common ways I’ve seen that things get messed up are:

  • Dependencies are inaccurately defined
  • Paths tracked aren’t rooted in stock Sitecore items
  • Gaps in the tracked items exist (i.e. an untracked parent and tracked grandparent)
  • Typos

Unicorn is incredibly forgiving as far as handling these errors goes. There is an internal process that attempts to apply dependencies implicitly, and it will most likely do the job for you. Oftentimes these errors will not show up until someone tries to apply the configurations to an empty Sitecore install. This is definitely not a situation you want to find yourself in while rebuilding a production environment in an emergency. It's in your best interest to make sure that you're building your tracked tree appropriately before it comes to that.

What can be done?

I propose a scripting solution. Sounds complicated? Yes, it certainly was a challenging problem. When looking at Unicorn configurations there's a surprising amount of complexity that you need to take into account:

  • Abstract configurations
  • Includes
  • Excludes
  • Excepts
  • Wildcards

It doesn’t take long for the configurations to become incredibly complex to manage.

Enter PowerShell

I’ve created a two step process to handle analyzing and reporting.

Step 1, building a model

The first step is to construct a trie-style data structure to simulate the desired state of a Unicorn configuration.
The script simply requires the root path that contains all your configuration files; it'll automatically find all the Unicorn configurations wherever they are.
$tst = Get-UnicornModel.ps1 -Path "C:\Code\MyProject\ProjectRoot"

Gist for model building script found here

The result is a PowerShell object representation of each tracked node, for example /sitecore/layout/layouts.
From here you can see some basic data about the desired state of your Sitecore environment.

  • Config – A list of configurations that track this item (it needs to be a list because multiple Unicorn configs CAN track a single item)
  • Node – Name of the node
  • Path – Path of the node
  • Parent – Object representation of parent node in the same format as this one
  • Database – Database the item is located in
  • Next – List of objects that represent the tracked children of this node

It's important to note that this model is in no way parsed from the .yml files; rather, it's implied by parsing the configurations. Think of it as the desired state of your Unicorn setup.
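As a rough sketch, each node in the model can be pictured as an object shaped like this (the values are illustrative, and $parentNode stands in for the parent object):

$node = [PSCustomObject]@{
    Config   = @("Foundation.Serialization") # illustrative configuration name
    Node     = "layouts"
    Path     = "/sitecore/layout/layouts"
    Parent   = $parentNode                   # same shape, one level up
    Database = "master"
    Next     = @()                           # tracked children, same shape
}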

Step 2, analyzing the model

Once you have all this information at hand, it's relatively easy to parse it looking for oddities. As it sits, I am currently tracking the three things listed below.
Invoke-UnicornAssessment.ps1 -Trie $model -ErrorOnOrphans
Notice there are two switches to control the behavior of this script:

  • -ErrorOnOrphans : Script throws an error when there are orphans detected
  • -ErrorOnDependencyMismatch : Script throws an error when the dependencies aren’t properly explicitly set

This lets you control from your devops pipelines whether this should be a warning or a full-stop error that halts progress.

Gist for analyzing script found here

  • Does a configuration have an unspecified dependency on another configuration?
  • Are there orphans? (item tracked, parent NOT tracked, grandparent tracked)
  • Is a tracked root not part of default Sitecore?

Using this model, the analysis can easily be expanded to track anything you desire. For example, maybe you want to make sure that your developers aren't tracking items under the home node; that check is easy to implement, as sketched below.
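A minimal sketch of such a check, assuming the node shape described above (the function name is hypothetical):

function Find-HomeNodeTracking {
    param($Node)
    # Flag anything tracked beneath the site's home item.
    if ($Node.Path -like "/sitecore/content/home/*") {
        Write-Warning "Tracked under home: $($Node.Path) ($($Node.Config -join ', '))"
    }
    # Recurse into the tracked children.
    foreach ($child in $Node.Next) {
        Find-HomeNodeTracking -Node $child
    }
}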

The end result

Now you can have confidence that your Unicorn configurations won't have chronic problems, as well as being able to finely control how Unicorn configurations are managed. Anyone, no matter their experience level with Unicorn, can make mistakes setting this stuff up; now you can rest a little more easily :).

Sitecore Package Autoloader


Clients I've worked with always ask for a method of associating content with a particular release. Generally speaking we'd go with a Unicorn deploy-once configuration, Sitecore Sidekick (which I'm obviously quite partial to), or a manual package install post-build step. All of these methods have their strengths and drawbacks. The main drawback is controlling when items should be added. For example, if you want content to be added once and then never added again, it's a hassle for Unicorn because you'll need to remove the configuration tracking those items, and it's a hassle for Sidekick for the same reason: you'll need to remove the scripting kicking off the content transfer for subsequent releases. Additionally, both of these methods involve quite a bit of configuration scripting in both your solution and your build/release pipelines.

Automating packages


My proposal to fix this comes in the form of automating package installation, with a descriptor for when the package should be installed. It's based on a few techniques I've used in the past that have been pretty effective at making this process as simple and seamless as possible.

This is facilitated through a NuGet package found here.

  1. No configuration needed (beyond the base configuration needed for the module).
  2. No additions to your build/release pipeline.
  3. Full control over when/if a package is installed.
  4. Full control over dependencies, if you have packages that require other packages to be installed first.
  5. Full control over the type of package install method being used.

Using packages from an embedded resource

This method minimizes the effort needed to facilitate an automatic package deployment. Note that this method will incur a memory cost based on the size of the package, so be careful of your package sizes when using it.

  1. The package is embedded in a dll file.
    1. This is nice because every build/release process ever handles dlls easily
  2. A descriptor (a C# POCO: PackageAutoloaderDescriptor) controls when a package should be applied
  3. The package is installed as part of the initialize pipeline so you’re guaranteed the package content is installed before Sitecore is usable

A simple example

	public class DemoDescriptor : PackageAutoloaderDescriptor
	{
		public override string PackageNamespace => "PackageAutoloaderDemo.demo.zip";
		public override List<DescriptorItemRequirements> Requirements => new List<DescriptorItemRequirements>()
		{
			new DescriptorItemRequirements()
			{
				Database = "master",
				ItemId = new ID("{76036F5E-CBCE-46D1-AF0A-4143F9B557AA}")
			}
		};
	}
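For the embedded resource to exist, the package zip has to be compiled into the assembly. In a classic .csproj that is just an EmbeddedResource entry; the file name below is illustrative, and the resource name becomes the project's default namespace plus the file name, which is what the PackageNamespace above refers to (assuming a default namespace of PackageAutoloaderDemo):

<ItemGroup>
  <EmbeddedResource Include="demo.zip" />
</ItemGroup>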

Using packages from the filesystem

Thanks to Robin Hermanussen for the comment suggesting to add this feature.

This allows automatically installing packages in the same way as above, except instead of referencing a namespace for an embedded resource package you can have it install a package from a filepath. This saves on memory consumption and allows much larger packages to be installed without worry. Note that this method requires a build/release mechanism for delivering the Sitecore package to your server.

  1. The package is delivered to the server
    1. This can be located anywhere, not simply the App_Data/packages folder that standard Sitecore uses
  2. A descriptor (a C# POCO: PackageFileLoaderDescriptor) controls when a package should be applied
  3. The package is installed as part of the initialize pipeline so you’re guaranteed the package content is installed before Sitecore is usable

A simple example

	public class DemoDescriptor2 : PackageFileLoaderDescriptor
	{
		public override IItemInstallerEvents ItemInstallerEvents => 
			new DefaultItemInstallerEvents(new BehaviourOptions(InstallMode.Overwrite, MergeMode.Undefined));

		public override List<DescriptorItemRequirements> Requirements => new List<DescriptorItemRequirements>()
		{
			new DescriptorItemRequirements()
			{
				Database = "master",
				ItemId = new ID("{190B1C84-F1BE-47ED-AA41-F42193D9C8FC}")
			}
		};

		public override string RelativeFilePath => "/PackageAutoloader/demo2.zip";
	}

Usage instructions

You can read the documentation on setting up Package Autoloader here

SXA Advanced Dictionary

Download the Helix foundation project HERE

How do you handle basic content snippets?

People generally have strong opinions on where simple phrases or single words should be stored in order to properly localize them. Opinions normally fall into two camps.

Store simple content in standard values

One camp stores simple phrases or single words in the standard values of templates. This allows for more flexibility on a case-by-case basis, but makes it hard to change them wholesale.

Using stock Dictionaries

The second camp uses the dictionary, but that comes with its own problems, particularly for keeping Helix pure and having a component own its own Sitecore items. Additionally, in SXA you need to worry about utilizing components across different sites and tenants that own completely different dictionary locations.

Either way is not very good

Both of these options come with pretty serious problems that impose significant tech-debt-style nastiness on content authors in terms of flexibility.

Enter the AutoDictionary


  • Automatically creates your dictionary items if they don't exist.
  • Allows authors to optionally edit the dictionary items from the Experience Editor.
  • SXA site component sharing is automatically handled.

Traditionally a dictionary key will be pathed using dots, like so:

Carousel.Labels.Next

This would look for the dictionary definition with that key. Traditionally it would be located at the path:

Dictionary/Carousel/Labels/Next

Using this information we know where the dictionary definition SHOULD be.
With the addition of a default text value we can create these dictionary items automatically, whereas a traditional dictionary would output nothing.

<span class="btn">@Html.AutoTranslate("Carousel.Labels.Next", "Next")</span>

The variant below would be Experience Editor authorable:

<span class="btn">@Html.AutoTranslate("Carousel.Labels.Next", "Next", true)</span>

Sitecore Analytics Errors

ERROR [Experience Analytics]: System.Net.WebException: The remote name could not be resolved: 'reportingserviceurl'
   at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
   at System.Net.HttpWebRequest.GetRequestStream()
   at Sitecore.Xdb.Reporting.Datasources.Remote.RemoteReportDataSourceProxy.GetData(ReportDataQuery query)
   at Sitecore.Xdb.Reporting.ReportDataProvider.ExecuteQueryWithCache(ReportDataQuery query, ReportDataSource dataSource, CachingPolicy cachingPolicy)
   at Sitecore.Xdb.Reporting.ReportDataProvider.GetData(String dataSourceName, ReportDataQuery query, CachingPolicy cachingPolicy)
   at Sitecore.ExperienceAnalytics.Core.Repositories.SiteRemoteReader.GetEntities(String sqlQuery)
   at Sitecore.ExperienceAnalytics.Core.Repositories.SiteRemoteReader.GetAll(NameValueCollection readingPreferences)
   at Sitecore.ExperienceAnalytics.Core.Repositories.CachedReaderDecorator`2.GetAll(NameValueCollection readingPreferences)
   at Sitecore.ExperienceAnalytics.Core.Repositories.SiteFilter.FilterReaderDecorator`2.GetAll(NameValueCollection readingPreferences)
   at Sitecore.ExperienceAnalytics.Client.RenderingHelper.GetSiteComboBoxItems()

If you're getting this error message, it's likely that your configuration is missing the URL to the reporting service.

On the CM server modify the configuration file at:
\wwwroot\App_Config\Sitecore\Azure\Sitecore.Xdb.Remote.Client.CM.config

Notice that there are two spots for URLs. If those locations contain dummy placeholder URLs then something went awry with the original setup. Replace the placeholder URLs with your rep (reporting) and prc (processing) service URLs.

Azure Search Missing Target Dropdown

Missing options in the Target dropdown for the general link's internal link form? The options are sourced from the search index, oddly enough.

First, check whether a simple rebuild of your core index does the trick.

If you've already tried that and still have no dice, you may be running into the same issue I did. After going to Sitecore Support I got a few good pieces of information:

  1. In order to use Azure Search in Sitecore you need to limit the fields indexed by Sitecore, typically done with <indexAllFields>false</indexAllFields>
  2. There are some fields required by SPEAK to make these forms work properly

The Solution

There are a few templates and fields that need to be available for this functionality to work properly. Make sure your solution has these standard configuration nodes set up.

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/" xmlns:search="http://www.sitecore.net/xmlconfig/search/">
  <sitecore role:require="ContentManagement or ContentDelivery" search:require="azure">
    <contentSearch>
      <indexConfigurations>
        <defaultCloudIndexConfiguration>
          <documentOptions>
            <include hint="list:AddIncludedTemplate">
              <StandardTemplate>{1930BBEB-7805-471A-A3BE-4858AC7CF696}</StandardTemplate>
              <CommonText>{76F63DF7-0235-4164-86AB-84B5EC48CB2A}</CommonText>
            </include>
            <include hint="list:AddIncludedField">
              <fieldId>{8CDC337E-A112-42FB-BBB4-4143751E123F}</fieldId>
              <hidden>{39C4902E-9960-4469-AEEF-E878E9C8218F}</hidden>
            </include>
          </documentOptions>
        </defaultCloudIndexConfiguration>
      </indexConfigurations>
    </contentSearch>
  </sitecore>
</configuration>

Azure Search replication

If you're trying to get a geo-replicated disaster recovery site set up and you're using Azure Search, you likely ran into the same issue that I did. Azure Search simply does not have the geo-replication tools or abilities that SQL does. This becomes all the more frustrating given that it's literally the only PaaS element in the Sitecore ecosystem that doesn't have this functionality. If you don't have the luxury of being able to re-index your data rapidly, you're stuck waiting for the data to index; in the context of Sitecore this can take several hours on particularly large sites.

Additionally, this can be problematic when dealing with blue/green deployments, as customer-facing content could and should be included in your search index. This problem can be solved in a similar fashion; when added to this method of zero-downtime deployments it makes for a more complete and safe deployment.

Using an Azure Search index as a source

Any data processing you needed to do to populate your primary index can be skipped if you simply use the main Azure Search index as the source for the second. I have this brokered through an Azure Function.

The code for the function is as follows:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;


namespace BendingSitecore.Function
{
    public static class AzureSearchReplicate
    {
        [FunctionName("AzureSearchReplicate")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
	        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);

			IEnumerable<string> indexes = Enumerable.Empty<string>();
			if (data.indexes != null){
				indexes = ((JArray)data.indexes).Select(x => (string)x);
			} 
	        try
	        {
		        Start(new SearchServiceClient(data.source.ToString(), new SearchCredentials(data.sourceKey.ToString())),
			        new SearchServiceClient(data.destination.ToString(), new SearchCredentials(data.destinationKey.ToString())), false, log, indexes);
			}
	        catch (Exception e)
	        {
				log.LogError(null, e, "An Error occurred");
		        return new BadRequestObjectResult("Require a json object with source, destination and keys.");
			}

	        return  new OkObjectResult($"Azure Search replication is running, should be finished in about 10 minutes.");
        }
		public static void Start(SearchServiceClient source, SearchServiceClient destination, bool wait,
            ILogger log, IEnumerable<string> indexes)
		{
			List<Task> tasks = new List<Task>();
			ClearAllIndexes(destination, indexes);
			foreach (var index in source.Indexes.List().Indexes.Where(x => !indexes.Any() || indexes.Any(i => i.StartsWith(x.Name))))
			{
				tasks.Add(Task.Run(async () =>
				{
					try
					{
						destination.Indexes.Get(index.Name);
					}
					catch (Exception e)
					{
						log.LogInformation($"creating index {index.Name}", null);
						destination.Indexes.Create(index);
						await Task.Delay(5000);
					}
					await MigrateData(source.Indexes.GetClient(index.Name),
						destination.Indexes.GetClient(index.Name), log);
				}));
			}
			if (wait)
			{
				foreach (var task in tasks)
				{
					task.Wait();
				}
			}
		}

		public static void ClearAllIndexes(SearchServiceClient client, IEnumerable<string> indexes)
		{
			foreach (var index in client.Indexes.List().Indexes.Where(x => !indexes.Any() || indexes.Any(i => i.StartsWith(x.Name))))
			{
				client.Indexes.Delete(index.Name);
			}
		}

		public static async Task MigrateData(ISearchIndexClient source, ISearchIndexClient destination,
            ILogger log)
		{
			log.LogInformation($"Starting migration of data for {source.IndexName}", null);
			SearchContinuationToken token = null;
			var searchParameters = new SearchParameters { Top = int.MaxValue };
			int retryCount = 0;
			while (true)
			{
				DocumentSearchResult<Document> results;
				if (token == null)
				{
					results = await source.Documents.SearchAsync("*", searchParameters);
				}
				else
				{
					results = await source.Documents.ContinueSearchAsync(token);
				}
				try
				{
					await destination.Documents.IndexAsync(IndexBatch.New(GetAction(destination, results)));
				}
				catch (Exception e)
				{
					log.LogError(e, "Error occurred writing to destination", null);
					log.LogInformation("Retrying...", null);
					retryCount++;
					if (retryCount > 10){
						log.LogError("Giving up...", null);
						break;
					}
					continue;
				}
				if (results.ContinuationToken != null)
				{
					token = results.ContinuationToken;
					continue;
				}

				break;
			}
			log.LogInformation($"Finished migration data for {source.IndexName}", null);
		}

		public static IEnumerable<IndexAction> GetAction(ISearchIndexClient client, DocumentSearchResult<Document> documents)
		{
			return documents.Results.Select(doc => IndexAction.MergeOrUpload(doc.Document));
		}
    }
}

Additionally make sure your Azure function has these configuration settings.


        AzureWebJobDashboard                     = "DefaultEndpointsProtocol=https;AccountName=$storageName;AccountKey=$accountKey"
        AzureWebJobsStorage                      = "DefaultEndpointsProtocol=https;AccountName=$storageName;AccountKey=$accountKey"
        FUNCTIONS_EXTENSION_VERSION              = "~2"
        FUNCTIONS_WORKER_RUNTIME                 = "dotnet"
        WEBSITE_NODE_DEFAULT_VERSION             = "8.11.1"
        WEBSITE_RUN_FROM_PACKAGE                 = "1"
        WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = "DefaultEndpointsProtocol=https;AccountName=$storageName;AccountKey=$accountKey"
        WEBSITE_CONTENTSHARE                     = "$storageName"
        AzureWebJobsSecretStorageType            = "Files"

Running your function

Execute the function code using a raw json request body like so:

{
    "destination":  "[standby azure search name]",
    "destinationKey":  "[standby azure search key]",
    "source":  "[primary azure search name]",    
    "sourceKey":  "[primary azure search key]",
    "indexes":  null
}

Note: if you want to manage only particular indexes, you may pass in a JSON array of index names. The function will only clear/refresh the indexes specified; if null, it will clear/refresh all indexes.
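For example, the function can be triggered from PowerShell (the URL, function key, and service names below are placeholders):

$body = @{
    destination    = "standby-search"
    destinationKey = "<standby admin key>"
    source         = "primary-search"
    sourceKey      = "<primary admin key>"
    indexes        = $null
} | ConvertTo-Json

Invoke-RestMethod -Method Post -ContentType "application/json" -Body $body `
    -Uri "https://<your-function-app>.azurewebsites.net/api/AzureSearchReplicate?code=<function key>"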

Automating

Through PowerShell there are ways to create and execute the Azure Function given a valid Azure context and some desired names and resource groups. Expect to see a blog post on that in the near future.