Warning: SetCookie changes implementation in SharePoint 2013

I was recently verifying that some of my custom code worked in SharePoint 2013.  I had some older code that set cookies to remember user preferences.  While debugging, I noticed that all of my cookies were coming back as either true or false instead of the values I put into them.  I had been using a SharePoint JavaScript method called SetCookie.  Below is the implementation that existed in SharePoint for the previous three versions (2003, 2007, and 2010).

function SetCookie(name, value, path)
{
	document.cookie=name+"="+value+";path="+path;
}

But for some reason Microsoft decided to change the implementation of this method in SharePoint 2013 to the following:

function SetCookie(sName, value) {
    SetCookieEx(sName, value, false, window);
}
function SetCookieEx(sName, value, isGlobal, wnd) {
    var c = sName + (value ? "=true" : "=false");
    var p = isGlobal ? ";path=/" : "";

    wnd.document.cookie = c + p;
}

Notice that this new method coerces the value into a boolean and forcefully sets the cookie to true or false.  So if you were storing anything other than boolean values in your cookies and used this method in previous versions of SharePoint, you will now need to update your code.  For this you have two options:

Option 1:  I found there is another method in SharePoint 2013 which still implements SetCookie the same way as previous versions of SharePoint.

function SetMtgCookie(cookieName, value, path) {
    document.cookie = cookieName + "=" + value + ";path=" + path;
}

So you can easily do a search and replace, swapping SetCookie for SetMtgCookie, and you’ll be good to go.

Option 2:  This is the option I went with.  As tempting as the quick search and replace from option 1 sounded, I wanted to remove my dependency on SharePoint’s implementation in case it changes again in the future.  Since the older SetCookie is only one line, I just replaced my method calls with that line:

document.cookie=name+"="+value+";path="+path;

It was a pretty simple change, only slightly more time-consuming than option 1, and it removed a dependency.
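If you set cookies in more than a couple of places, you could wrap that line in a small helper of your own.  Here is a minimal sketch; the function name is mine, not SharePoint’s:

function setMyCookie(name, value, path)
{
	// Same behavior as the pre-2013 SetCookie, but owned by our code,
	// so future SharePoint changes can't silently break it.
	document.cookie = name + "=" + value + ";path=" + path;
}

setMyCookie("userPreference", "blue", "/");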

Anyway, I hope this helps someone else out when troubleshooting code migration to SharePoint 2013.

Repopulating the Newsfeed Cache after a Server Restart

In SharePoint 2013, the Newsfeed relies on data cached in the Distributed Cache service, which behind the scenes uses an AppFabric cache cluster.  Newsfeed data is lost when you restart a server in your farm running this service without doing a graceful shutdown (http://technet.microsoft.com/en-us/library/jj219613.aspx#graceful).

# Run on the server you are about to restart: gracefully move the cached
# data to the other cache hosts, then remove this server from the cluster.
Stop-SPDistributedCacheServiceInstance -Graceful
Remove-SPDistributedCacheServiceInstance

Sometimes you have to restart all servers in the farm, and then your newsfeed will be empty.  There is a timer job that runs every 5 minutes called “Feed Cache Repopulation Job” which, according to http://technet.microsoft.com/en-us/library/jj219560.aspx, is supposed to automatically repopulate the newsfeed cache from the content stored in SharePoint.  Our SharePoint 2013 farm is on the March 2013 PU, and this job did not seem to be repopulating the cache.

The article seemed to imply you could run some PowerShell cmdlets to accomplish the same thing.  I tried these:

Update-SPRepopulateMicroblogLMTCache
Update-SPRepopulateMicroblogFeedCache

The parameter for the first one was easy: just pass in your User Profile service application (UPA) proxy.  The second one also needed this proxy, but it could also take an account name or a site URL (http://technet.microsoft.com/en-us/library/jj219749.aspx).  The wording states that when using the account name you should use the “user account name for the user profile service application”.  I took this to mean the UPA service account.  I tried that, and even after waiting several hours there still wasn’t any repopulation.  So I tried the site URL option, passing in the My Site host URL.  Still nothing.

I finally figured out, after using Reflector on the source code, that the account name it was expecting was the account of a user, in order to repopulate THAT user’s information.  I updated my script to the code below to run Update-SPRepopulateMicroblogFeedCache for EACH user in the UPA, and my newsfeed cache started coming back to life!

# Get the User Profile service application proxy (the name is specific to our farm)
$proxy = Get-SPServiceApplicationProxy | ? {$_.Name -eq "MySite User Profile Service"}

# Repopulate the last modified time (LMT) cache
Update-SPRepopulateMicroblogLMTCache -ProfileServiceApplicationProxy $proxy

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server")
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Server.UserProfiles")

# Connect to the My Site host and get a UserProfileManager
$url = "https://mysite.company.com"
$contextWeb = New-Object Microsoft.SharePoint.SPSite($url)
$ServerContext = [Microsoft.Office.Server.ServerContext]::GetContext($contextWeb)
$UserProfileManager = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($ServerContext)

$Profiles = $UserProfileManager.GetEnumerator()

# Repopulate the feed cache one user at a time
foreach ($oUser in $Profiles) {
	if ($oUser.item("SPS-PersonalSiteCapabilities").Value -eq 14) {
		$personalurl = $url + $oUser.item("personalspace").Value
		Write-Host $oUser.item("AccountName").Value
		Update-SPRepopulateMicroblogFeedCache -ProfileServiceApplicationProxy $proxy -AccountName $oUser.item("AccountName").Value
		#-siteurl $personalurl
	}
}

$contextWeb.Dispose()

The first time through I got a couple of errors, so I added the if statement to check for SPS-PersonalSiteCapabilities being equal to 14.  After that I got fewer errors, but there were still a few.  That’s when I tried the site URL route: I was thinking that if I passed in the URL of a user’s personal site, it might work better.  I didn’t get any errors, but it also didn’t repopulate the newsfeed for those users.  Oh well…

I now believe that the SiteUrl parameter is for repopulating the newsfeed cache for any sites that have a newsfeed on the home page, like the new SP 2013 team site template.  I know our environment doesn’t have any of these, so I skipped this part.  I was thinking at some point I will need to figure this out though.  Hopefully it won’t involve looping through all sites in my farm, but my gut says it will.  If someone else has figured out a good solution, please post the PowerShell code in the comments.  Thanks.
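As a starting point, here is an untested sketch that loops through every site collection in a web application and passes each URL to the cmdlet.  The web application URL is a placeholder, and $proxy is the same UPA proxy from the script above; I have not verified that this actually repopulates anything:

# Untested sketch: pass each site collection's URL to the feed cache cmdlet.
# "https://teams.company.com" is a placeholder web application URL.
$webApp = Get-SPWebApplication "https://teams.company.com"
foreach ($site in $webApp.Sites) {
	Write-Host $site.Url
	Update-SPRepopulateMicroblogFeedCache -ProfileServiceApplicationProxy $proxy -SiteUrl $site.Url
	$site.Dispose()
}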

Update 3/29/2014:  

This issue has been resolved in Service Pack 1 (SP1).

Getting Active Directory UserId from Windows Claim in SharePoint 2013

We’ve always used NTLM for our SharePoint authentication, but in SharePoint 2013, claims is the preferred authentication method.  Fortunately, SharePoint 2013 ships with something called Windows claims.  This works much like the NTLM auth from before, but the Windows identity is converted into a claim that SharePoint can use.

This change means that your userid would look something like this:

i:0#.w|contoso\chris

instead of this:

contoso\chris

Sometimes when calling other services, you need the windows userid and not the claim userid.  So for these instances, I’ve created a few helper methods.

using System.Text.RegularExpressions;
using Microsoft.SharePoint.Administration.Claims;

//Regex needs more testing
public const string CLAIMS_REGEX = @"(?<IdentityClaim>[ic])?:?0(?<ClaimType>.)(?<ClaimValueType>.)(?<AuthMode>[wstmrfc])(\|(?<OriginalIssuer>[^\|]*))?(\|(?<ClaimValue>.*))";
 
public static string GetAdUserIdForClaim(string login)
{
    string userName = login;
 
    foreach (Match m in Regex.Matches(login, CLAIMS_REGEX, RegexOptions.IgnoreCase))
	{
		try
		{
			if (m.Groups["AuthMode"].Captures[0].Value.ToLower() == "w")
			{
				userName = m.Groups["ClaimValue"].Captures[0].Value;
			}
		}
		catch { /* ignore segments that fail to parse */ }
	}
    return userName;
}
 
public static string GetClaimForAdUserId(string login)
{
    string userName = login;
    SPClaimProviderManager mgr = SPClaimProviderManager.Local;
    if (mgr == null) return userName;
 
    SPClaim claim = new SPClaim(SPClaimTypes.UserLogonName, login, "http://www.w3.org/2001/XMLSchema#string", SPOriginalIssuers.Format(SPOriginalIssuerType.Windows));
    userName = mgr.EncodeClaim(claim);
 
    return userName;
}
 
public static bool IsLoginClaims(string login)
{
    Regex re = new Regex(CLAIMS_REGEX, RegexOptions.IgnoreCase);
    return re.IsMatch(login);
}

First, I made a regular expression to identify the different pieces of a claim (see http://social.technet.microsoft.com/wiki/contents/articles/13921.sharepoint-2013-and-sharepoint-2010-claims-encoding.aspx).  This lets me parse the claim for the Windows login name (see GetAdUserIdForClaim) and validate whether a string is a claim at all (see IsLoginClaims).
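As a quick sanity check, here is a hypothetical snippet using the sample values from above:

string claim = "i:0#.w|contoso\\chris";
Console.WriteLine(IsLoginClaims(claim));                  // True
Console.WriteLine(GetAdUserIdForClaim(claim));            // contoso\chris
Console.WriteLine(IsLoginClaims("contoso\\chris"));       // False
Console.WriteLine(GetAdUserIdForClaim("contoso\\chris")); // contoso\chris (returned unchanged)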

Update 01-22-2015:

After some more usage, I found that I was being too restrictive with the Claim Types and Claim Value Types in my regex.  I had based the options on the TechNet article above, but I then ran into some other Claim Types during recent work that were not in that article.  I then found this page: http://blogs.msdn.com/b/scicoria/archive/2011/06/30/identity-claims-encoding-for-sharepoint.aspx which lists a lot more than the TechNet article.  It also now seems that almost any value could appear there in the future.  Because of this, I changed the regex in the code above to allow any value in those two fields.

Gotchas using Custom Web Parts and the Minimal Download Strategy

I’ve been playing around with some of my custom code in SharePoint 2013.  One of the issues I noticed is that when I add any of my custom web parts to a page, the Minimal Download Strategy (MDS) fails over to the normal ASP.NET WebForms page.  You can tell the difference by looking at the URL.  A URL that looks like this:

/_layouts/15/start.aspx#/SitePages/Home.aspx

is using MDS.  Notice the actual page is start.aspx, and the page it is AJAX loading (MDS) is /SitePages/Home.aspx (the part after the hash (#)).  Whereas a normal ASP.NET WebForms page URL would look like this:

/SitePages/Home.aspx

Both render the same page, but with MDS you get the added benefit of less being downloaded on each click, plus smoother and snappier page changes.
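As an aside, MDS is controlled per site by a web-scoped feature, so you can toggle it while testing.  Something like this should work (the feature folder name MDSFeature is to the best of my knowledge; verify it in your farm, and the site URL is a placeholder):

Enable-SPFeature -Identity "MDSFeature" -Url "http://yourserver/sites/yoursite"
# Disabling the feature forces the classic WebForms rendering:
Disable-SPFeature -Identity "MDSFeature" -Url "http://yourserver/sites/yoursite"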

Gotcha #1 – Decorate your assembly or class with MDSCompliant(true).  If your MDS isn’t working and you see this message in your ULS logs:

MDSLog: MDSFailover: A control was discovered which does not comply with MDS

then you will need to add the attribute (http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.webcontrols.mdscompliantattribute.aspx).  Here is an example of adding it to the class:

[MdsCompliant(true)]
public class MyWebPart : WebPart
{
	//...Code
}

And here is an example of adding it to an assembly (assemblyinfo.cs):

[assembly: MdsCompliant(true)]

Every control that is loaded on the page needs this attribute.  So if you are loading other controls in your web part, you’ll need to make sure they also have this attribute.

Gotcha #2 – Declarative user controls, typically used for delegate controls, need a codebehind with the attribute set as well.  I use a lot of delegate controls and then use features to swap user controls in and out of them to add or remove functionality on the site.  Typically my user controls didn’t have a codebehind and would just add web parts, HTML, or other controls in the markup.  The issue is that if you want these controls to be swappable under MDS, you need to add a codebehind to the user control and decorate it with the MdsCompliant attribute.

So a normal user control like this:

<%@ Control Language="C#" AutoEventWireup="true"  %>
<%@ Register TagPrefix="MyWebParts" Namespace="MyWebParts"%>

<MyWebParts:WebPart runat="server" Title="WebPart"></MyWebParts:WebPart>

would need to be converted to this:

<%@ Assembly Name="$SharePoint.Project.AssemblyFullName$" %>
<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="MyControl.ascx.cs" Inherits="MyControl" %>
<%@ Register TagPrefix="MyWebParts" Namespace="MyWebParts"%>

<MyWebParts:WebPart runat="server" Title="WebPart"></MyWebParts:WebPart>

and with the following codebehind:

[MdsCompliant(true)]
public partial class MyControl : UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }
}

I couldn’t figure out a way to decorate the user control without using a codebehind.  If anyone else knows how to do this, please comment or contact me with the info.  Thanks!

Gotcha #3 – Inline scripts are not allowed and need to be registered through SPPageContentManager.  If you receive any of the following messages in your ULS logs, you will need to look at your content:

MDSFailover: document.write
MDSFailover: document.writeln
MDSFailover: Unexpected tag in head
MDSFailover: script in markup

The first two are obvious; you can’t have any document.write or document.writeln calls in your HTML.  The third is a little less obvious.  According to a Microsoft source, these tags are not allowed in the head:

  • Meta refresh tag
  • link tag for a stylesheet (text/css)
  • script tag
  • style tag
  • title tag
  • base tag

The fourth was the big one for me.  A few years ago I made the decision to switch almost all of my web parts over to XSLT rendering, which means I had a lot of inline JavaScript and CSS in my XSLT.  Luckily, I had previously created a special web part that could AJAX-load other web parts using an update panel, and had solved the inline script issue before.  While writing that web part I found this page, http://www.codeproject.com/Articles/21962/AJAX-UpdatePanel-With-Inline-Client-Scripts-Parser, which shows how to extend the out-of-the-box update panel to automatically find inline scripts and register them so they work correctly with AJAX.  I was able to slightly modify this code to work for my XSLT-rendered web parts.

// Requires using System.Text.RegularExpressions and Microsoft.SharePoint.Utilities
// (for SPPageContentManager) in addition to the usual System.Web.UI namespaces.

public bool IsAnAjaxDeltaRequest
{
	get
	{
		// MDS partial-page requests carry an AjaxDelta query string parameter
		return false == String.IsNullOrEmpty(Context.Request.QueryString["AjaxDelta"]);
	}
}

protected override void Render(HtmlTextWriter output)
{
	base.Render(output);
	// GetHtml (not shown) returns this web part's XSLT-rendered markup
	string html = GetHtml();
	if (IsAnAjaxDeltaRequest)
	{
		// On MDS requests, strip inline scripts and register them properly
		html = RegisterAndRemoveInlineClientScripts(this, this.GetType(), html);
	}
	output.Write(html);
}

public static readonly Regex REGEX_CLIENTSCRIPTS = new Regex(
	"<script\\s((?<aname>[-\\w]+)=[\"'](?<avalue>.*?)[\"']\\s?)*\\s*>(?<script>.*?)</script>",
	RegexOptions.Singleline | RegexOptions.IgnoreCase | RegexOptions.Compiled |
	RegexOptions.ExplicitCapture);

public static string RegisterAndRemoveInlineClientScripts(Control control, Type type, string htmlsource)
{
	if (htmlsource.IndexOf("<script", StringComparison.CurrentCultureIgnoreCase) > -1)
	{
		MatchCollection matches = REGEX_CLIENTSCRIPTS.Matches(htmlsource);
		if (matches.Count > 0)
		{
			for (int i = 0; i < matches.Count; i++)
			{
				string script = matches[i].Groups["script"].Value;
				string scriptID = script.GetHashCode().ToString();
				string scriptSrc = "";

				CaptureCollection aname = matches[i].Groups["aname"].Captures;
				CaptureCollection avalue = matches[i].Groups["avalue"].Captures;
				for (int u = 0; u < aname.Count; u++)
				{
					if (aname[u].Value.IndexOf("src",
						StringComparison.CurrentCultureIgnoreCase) == 0)
					{
						scriptSrc = avalue[u].Value;
						break;
					}
				}

				if (scriptSrc.Length > 0)
				{
					SPPageContentManager.RegisterClientScriptInclude(control,
						type, scriptID, scriptSrc);
				}
				else
				{
					SPPageContentManager.RegisterClientScriptBlock(control, type,
						scriptID, script);
				}

				htmlsource = htmlsource.Replace(matches[i].Value, "");
			}

		}
	}
	return htmlsource;
}

Since this code automatically registers any script references it finds, make sure the paths to your scripts are correct; otherwise MDS will silently fail over without any ULS messages.
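For reference, the regex handles both script forms you would typically emit from XSLT (the path and function name below are made up): the first becomes a registered script include, the second a registered script block.

<script src="/_layouts/15/MyScripts/mywebpart.js" type="text/javascript"></script>
<script type="text/javascript">initMyWebPart();</script>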

Update 5/3/2013:

I have found another potential MDS error message in ULS:

MDSLog: Master page version mismatch occurred in MDS

or this in the AJAX response if using Fiddler:

versionMismatch

I was able to resolve this by browsing to another site and then back to the original site with the issue.  It’s a weird one; if anyone knows more about this error, please contact me or comment below.

BlobCache issues with time difference between SharePoint WFE and SQL

We recently ran into an interesting issue: when a user uploaded an image into SharePoint and then tried to view it, they would receive an error.  For the rest of the day they would continue to get the error when viewing the image, but the image worked fine for everyone else.  If the user cleared their browser cache, the image would start working for them.  Also, if the user waited a few minutes after uploading an image before viewing it, it would work as expected.  The error the end user saw was “An unexpected error has occurred”, but looking at the real error revealed the following:

Message: Specified argument was out of the range of valid values.
Parameter name: utcDate
Stack Trace:    at System.Web.HttpCachePolicy.UtcSetLastModified(DateTime utcDate)   at System.Web.HttpCachePolicy.SetLastModified(DateTime date)   at Microsoft.SharePoint.Publishing.BlobCache.<>c__DisplayClass42.<SendCachedFile>b__41()   at Microsoft.SharePoint.SPSecurity.<>c__DisplayClass4.<RunWithElevatedPrivileges>b__2()  at  Microsoft.SharePoint.Utilities.SecurityContext.RunAsProcess(CodeToRunElevated secureCode)   at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(WaitCallback secureCode, Object param)   at Microsoft.SharePoint.SPSecurity.RunWithElevatedPrivileges(CodeToRunElevated secureCode)   at Microsoft.SharePoint.Publishing.BlobCache.SendCachedFile(HttpContext context, BlobCacheEntry target, SPUserToken currentUserToken, SiteEntry currentSiteEntry)   at Microsoft.SharePoint.Publishing.BlobCache.HandleCachedFile(HttpContext context, BlobCacheEntry target, Boolean anonymousUser, SiteEntry currentSiteEntry)   at Microsoft.SharePoint.Publishing.BlobCache.RewriteUrl(Object sender, EventArgs e, Boolean preAuthenticate)   at Microsoft.SharePoint.Publishing.PublishingHttpModule.AuthorizeRequestHandler(Object sender, EventArgs ea)   at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()   at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

The error told me a few things: one, this seemed to be time related, and two, it only affected files stored in the BLOB cache.  That also explained why clearing the browser cache “fixed” the issue for a user getting the error, and why other users did not receive it.  Because of the way SharePoint’s BlobCache optimizes things, it only sends the image to the browser if the image has changed, assuming the browser has a cached version.  Since the first time the user viewed the image generated an error, every time after that the browser was just displaying a cached version of the error.  This was also apparent because the correlation ID was the same GUID each time they viewed the image.  In SharePoint, each request gets its own GUID as the correlation ID, so they should never be the same between requests.

I spent some time in Reflector, and the best I could determine was that SharePoint was sending the last modified date of the image in the response header, and for some reason that date was in the future on the WFEs where the BlobCache was running, hence the error.  At first I thought the client’s clock might be causing this when uploading an image through explorer view, but that didn’t seem to affect it.  To be honest, I was stumped for a little while.  Then that evening I was out for a run and it hit me: I bet the SQL box’s time was ahead of the WFEs’.

I tested this theory by viewing the time on the SQL box and the WFE.  Since SQL was ahead by over a minute, I waited until the two servers were at different minute values and uploaded a file.  The last modified time of the file showing in SharePoint was actually the time on the SQL server, not on the WFE.  At that instant, the last modified time of the file was in the future from the WFE’s perspective.  So it seems that the stored procedure SharePoint calls to add a document calls getutcdate() and thus uses the SQL server’s time.
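If you want to check for this yourself, here is a rough sketch run from a WFE.  It assumes the SQL Server PowerShell module is available for Invoke-Sqlcmd, and the server name is a placeholder:

# Compare the WFE clock against the SQL server clock (run from a WFE).
# "SQL01" is a placeholder; Invoke-Sqlcmd requires the SQL Server PowerShell module.
$wfeUtc = (Get-Date).ToUniversalTime()
$sqlUtc = (Invoke-Sqlcmd -ServerInstance "SQL01" -Query "SELECT GETUTCDATE() AS UtcNow").UtcNow
Write-Host "WFE UTC: $wfeUtc"
Write-Host "SQL UTC: $sqlUtc"
Write-Host "Skew   : $(($sqlUtc - $wfeUtc).TotalSeconds) seconds"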

I got our infrastructure guys involved and had them look into the time issue.  Once they resolved it, our image issues went away.  I know that normally all computers in an Active Directory domain have the same time, but in this instance we are migrating to a new AD environment, and our SQL boxes were on one domain while the WFEs were on another.

Update 5/3/2013:

After talking with our infrastructure team, it turns out the issue was caused by our domain controllers being virtual.  This is a problem because, by default, VMs get their time from the Hyper-V host instead of from the PDC emulator on the domain as they should.  Basically, they needed to uncheck the box that says the VM gets its time from the Hyper-V host.
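On hosts with the Hyper-V PowerShell module, I believe the same checkbox can be flipped from the command line; a sketch with a placeholder VM name:

# Turn off host time sync for a virtualized domain controller ("DC01" is a placeholder).
# Run on the Hyper-V host; requires the Hyper-V PowerShell module.
Disable-VMIntegrationService -VMName "DC01" -Name "Time Synchronization"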

Update 5/30/2013:

It seems like this issue has gotten a lot better, but it still pops up every once in a while.  My infrastructure team pointed me to this blog article: http://blogs.msdn.com/b/virtual_pc_guy/archive/2010/11/19/time-synchronization-in-hyper-v.aspx.  It states that “we put logic in our integration service that will not change the time if the virtual machine is more than 5 seconds ahead of the physical computer”.  Has anyone else come across this issue and resolved it completely?