Sharing Header/Footer across platforms

I had a requirement that a site I was building needed to have its headers and footers sourced from a site owned by the parent company on a separate platform.  Sounds a bit insane, but it's doable.

Make certain your site's JS and CSS are extremely clean

The first thing you're going to want to verify is that you have nothing targeting general elements.  For example, all of your styling should be done with very specific class targeting: something like "my-secret-class" is great, whereas "form" is not so much.  Even worse would be styling root-level elements, such as assigning styles directly to the li element.

In short, don't use any JS/CSS that could interfere with what's coming in from the other domain.

Scrape and cache

Next you'll want to scrape the source site and parse out their header/footer and all of their CSS/JS references using HtmlAgilityPack:

		// Cached header/footer markup, keyed by the URL of the source site.
		private readonly Dictionary<string, string> _referrerHeaders = new Dictionary<string, string>();
		private readonly Dictionary<string, string> _referrerFooter = new Dictionary<string, string>();
		private readonly object _refreshLocker = new object();

		public virtual string GetHeader()
		{
			lock (_refreshLocker)
			{
				// GetOriginModel() (not shown) supplies the parent-site URL we source from.
				ValidateUrl(GetOriginModel()?.ReturnUrl);
				string ret;
				_referrerHeaders.TryGetValue(GetOriginModel()?.ReturnUrl ?? "", out ret);
				return ret ?? "";
			}
		}

		public virtual string GetFooter()
		{
			lock (_refreshLocker)
			{
				ValidateUrl(GetOriginModel()?.ReturnUrl);
				string ret;
				_referrerFooter.TryGetValue(GetOriginModel()?.ReturnUrl ?? "", out ret);
				return ret ?? "";
			}
		}

		public virtual void ValidateUrl(string url)
		{
			// Only absolute URLs to the source site are scraped; skip empty or relative ones.
			if (string.IsNullOrWhiteSpace(url) || url.StartsWith("/"))
				return;
			if (!_referrerHeaders.ContainsKey(url))
			{
				// Download the source page once and cache its parsed header and footer.
				HtmlDocument doc = new HtmlDocument();
				using (WebClient wc = new WebClient())
				{
					wc.Encoding = Encoding.UTF8;
					doc.LoadHtml(wc.DownloadString(url));
				}
				_referrerHeaders[url] = GenerateHeader(url, doc);
				_referrerFooter[url] = GenerateFooter(doc);
			}
		}

		public virtual string GenerateFooter(HtmlDocument doc)
		{
			return GetNodesByAttribute(doc, "class", "site-footer")?.FirstOrDefault()?.OuterHtml ?? "";
		}

		public virtual string GenerateHeader(string url, HtmlDocument doc)
		{
			Uri uri = new Uri(url);
			// Rewrite relative form actions into absolute URLs pointing back at the source domain.
			string markup = GetNodesByAttribute(doc, "class", "site-header")?.FirstOrDefault()?.OuterHtml.Replace("action=\"/", $"action=\"https://{uri.Host}/");
			// Their SVG sprite definition; without it their icons render blank.
			string svg = GetNodesByAttribute(doc, "class", "svg-legend")?.FirstOrDefault()?.OuterHtml;
			string stylesheets =
				GetNodesByAttribute(doc, "rel", "stylesheet")
					.Aggregate(new StringBuilder(), (tags, cur) => tags.Append(cur.OuterHtml.Replace("href=\"/bundles", $"href=\"https://{uri.Host}/bundles")))
					.ToString();
			string javascripts =
				doc.DocumentNode.SelectNodes("//script")
					.Aggregate(new StringBuilder(), (tags, cur) =>
					{
						// Skip their Google Tag Manager snippet; we don't want their tracking firing on our site.
						if (cur.OuterHtml.Contains("gtm.js"))
							return tags;
						return tags.Append(cur.OuterHtml.Replace("src=\"/bundles", $"src=\"https://{uri.Host}/bundles"));
					})
					.ToString();

			return $"{svg}{stylesheets}{markup}{javascripts}";
		}

		public virtual HtmlNodeCollection GetNodesByAttribute(HtmlDocument doc, string attribute, string value)
		{
			// Note: SelectNodes returns null (not an empty collection) when nothing matches.
			return doc.DocumentNode.SelectNodes($"//*[contains(@{attribute},'{value}')]");
		}

NOTE: You’ll likely need to heavily customize your GenerateHeader and GenerateFooter methods.

Let's break this down, as it's a bit hard to follow.

  1. You pass in the URL that you want to source your header and footer from.
  2. The service checks the cache to see if we already have that header/footer.
  3. Using a WebClient, it scrapes the markup off the source page.
  4. Using whatever means we can, we identify where the header and footer markup lives; in this case it's identifiable by the "site-header" and "site-footer" classes, which makes it easier.
  5. We turn any relative links into absolute links, since relative links won't work once the markup is served from a separate domain.
  6. We grab their SVG sprite definition; we'll need that or their icons will be blank.
  7. We grab all stylesheets and scripts, making sure to strip out the things that don't make sense on a case-by-case basis, like the other domain's tracking libraries.
  8. We store all of this in the cache.
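
With the service in place, the remaining question is how the markup actually ends up on your pages. Here's a minimal sketch of one way to do it, assuming an ASP.NET MVC site and child actions; the controller name and wiring are illustrative rather than anything prescribed by the service above.

		public class SharedChromeController : Controller
		{
			// One shared instance so the cache (and the refresh timer added below) is
			// reused across requests instead of being rebuilt for every controller.
			private static readonly SiteComponentShareService ShareService = new SiteComponentShareService();

			[ChildActionOnly]
			public ActionResult Header()
			{
				// The cached markup already contains the absolute URLs, SVG sprite,
				// stylesheets and scripts, so it can be emitted verbatim.
				return Content(ShareService.GetHeader(), "text/html");
			}

			[ChildActionOnly]
			public ActionResult Footer()
			{
				return Content(ShareService.GetFooter(), "text/html");
			}
		}

Your layout would then call @Html.Action("Header", "SharedChrome") and @Html.Action("Footer", "SharedChrome") wherever the shared chrome should appear.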

Make sure you periodically clear the caches to pick up changes from the source.  I did this simply, like so:

		// Held in a field so the timer stays referenced for the lifetime of the service.
		private readonly Timer _cacheRefreshTimer;

		public SiteComponentShareService()
		{
			// System.Timers.Timer set to fire every 10 minutes (600,000 ms).
			_cacheRefreshTimer = new Timer(600 * 1000);
			_cacheRefreshTimer.Elapsed += (sender, args) =>
			{
				// Take the same lock as GetHeader/GetFooter so we never clear mid-read.
				lock (_refreshLocker)
				{
					_referrerHeaders.Clear();
					_referrerFooter.Clear();
				}
			};
			_cacheRefreshTimer.Start();
		}

This clears the cached header/footer objects every 10 minutes, taking the same lock as the getters so the cache can't be cleared while something is in the middle of using it.
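
As an aside, if you'd rather not manage the timer and lock yourself, roughly the same expiry behaviour can be had with MemoryCache from System.Runtime.Caching and an absolute expiration. This is just an alternative sketch, not what the code above does; GetHeaderCached is an illustrative name and footer handling is omitted for brevity.

		// Alternative sketch only: MemoryCache handles the 10-minute expiry for us.
		public virtual string GetHeaderCached(string url)
		{
			string cached = MemoryCache.Default.Get(url) as string;
			if (cached != null)
				return cached;

			HtmlDocument doc = new HtmlDocument();
			using (WebClient wc = new WebClient())
			{
				wc.Encoding = Encoding.UTF8;
				doc.LoadHtml(wc.DownloadString(url));
			}

			string header = GenerateHeader(url, doc);
			// Entries silently fall out of the cache 10 minutes after being added.
			MemoryCache.Default.Set(url, header, DateTimeOffset.Now.AddMinutes(10));
			return header;
		}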

Finishing Touches

The acquired header and footer may have fancy XHR needs that have to be accounted for, and very likely you'll need to proxy those requests. For example, I needed to catch search suggestion requests and pass them through to the parent site's HawkSearch endpoint:

		[Route("hawksearch/proxyautosuggest/{target}")]
		public ActionResult RerouteAutosuggest(string target)
		{
			WebClient wc = new WebClient();
			string ret = wc.DownloadString(
				$"https://www.parentsitewherewefoundtheheaders.org/hawksearch/proxyAutoSuggest/{target}?{Request.QueryString}");
			return Content(ret);

		}

As you can see, we're simply catching the request and passing it along to their domain's endpoint. Since we're running the same JavaScript and the same header markup, this simple pass-through makes it look like we have the exact same header.
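
One small refinement to consider, and this is an assumption about the upstream response rather than something shown above: Content(ret) leaves the response content type at the ASP.NET default (text/html), so if the endpoint returns JSON you may want to forward its content type as well. The same action, tweaked:

		[Route("hawksearch/proxyautosuggest/{target}")]
		public ActionResult RerouteAutosuggest(string target)
		{
			using (WebClient wc = new WebClient())
			{
				string ret = wc.DownloadString(
					$"https://www.parentsitewherewefoundtheheaders.org/hawksearch/proxyAutoSuggest/{target}?{Request.QueryString}");
				// ResponseHeaders is populated after the download; fall back to text/html
				// if the upstream response didn't include a content type.
				string contentType = wc.ResponseHeaders?[HttpResponseHeader.ContentType] ?? "text/html";
				return Content(ret, contentType);
			}
		}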