403 Forbidden from /_api/contextinfo when using Chrome Postman REST App

tl;dr
The Postman app was sending an Origin header to /_api/contextinfo, and that was generating a 403 Forbidden. Using a Fiddler rule to remove the Origin HTTP header, the call to the /_api/contextinfo endpoint then worked.

I’ve recently been trying to grok the SharePoint Online REST API, particularly executing requests that require an HTTP POST and therefore an X-RequestDigest header.

To help me understand the interaction with the REST API I installed the Google Chrome Postman REST App and started to test.

If you open Google Chrome, log in to your Office 365 site and then launch Postman, requests to the REST API will be sent with the appropriate FedAuth and rtFa cookies to authenticate.

So far so good: I was able to execute simple GET requests such as https://*myo365site*/_api/web and get back results as expected.

I wanted to start experimenting with the REST APIs for custom permissions that are “documented” here:

https://msdn.microsoft.com/en-us/library/office/dn495392.aspx

So the first thing I needed to do was get a Request Digest to add as a header to my POST requests from Postman. Of course, to get a Request Digest you need to have issued a POST request – but for POST requests you need an X-RequestDigest header – chicken & egg.

The process to follow is to issue a POST request to the https://*myo365site*/_api/contextinfo endpoint with an empty body and two headers:

Accept: application/json;odata=verbose
Content-Length: 0
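To make the shape of the call concrete, here is a minimal sketch in Python of composing that POST (the site URL and cookie values are placeholders, and the helper name is mine, not part of any SharePoint API):

```python
# Sketch of composing the contextinfo POST. The site URL and cookie
# values are placeholders and build_contextinfo_request is my own helper.

def build_contextinfo_request(site_url, fed_auth, rt_fa):
    """Return (method, url, headers) for a request-digest call.

    Deliberately carries no Origin header, since in my environment its
    presence made SharePoint Online answer 403 Forbidden.
    """
    headers = {
        "Accept": "application/json;odata=verbose",
        "Content-Length": "0",
        "Cookie": "rtFa=%s; FedAuth=%s" % (rt_fa, fed_auth),
    }
    return ("POST", site_url.rstrip("/") + "/_api/contextinfo", headers)

method, url, headers = build_contextinfo_request(
    "https://myo365site", "fedauthvalue", "rtfavalue")
```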

This “should” return a 200 status code and a body such as the following:


{
  "d": {
    "GetContextWebInformation": {
      "__metadata": {
        "type": "SP.ContextWebInformation"
      },
      "FormDigestTimeoutSeconds": 1800,
      "FormDigestValue": "<FORMDIGESTVALUE>",
      "LibraryVersion": "16.0.4107.1226",
      "SiteFullUrl": "https://*myo365site*",
      "SupportedSchemaVersions": {
        "__metadata": {
          "type": "Collection(Edm.String)"
        },
        "results": [
          "14.0.0.0",
          "15.0.0.0"
        ]
      },
      "WebFullUrl": "https://*myo365site*"
    }
  }
}
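Once the call succeeds, the digest can be pulled out of that JSON body. A small sketch (using a trimmed-down copy of the structure above, with a placeholder digest value):

```python
import json

# Trimmed-down copy of the response body shown above; the digest is a
# placeholder string.
body = json.loads("""
{"d": {"GetContextWebInformation": {
    "FormDigestValue": "<FORMDIGESTVALUE>",
    "FormDigestTimeoutSeconds": 1800}}}
""")

info = body["d"]["GetContextWebInformation"]
digest = info["FormDigestValue"]

# The digest becomes the X-RequestDigest header on subsequent POSTs and
# should be refreshed before FormDigestTimeoutSeconds elapses.
post_headers = {"X-RequestDigest": digest}
```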

Instead I was getting a 403 Forbidden status code with no body. I then started up Fiddler to see what was being sent through by Postman. Postman was sending through a few extra headers:

POST https://*myo365site*/_api/contextinfo HTTP/1.1
Host: *myo365site*
Connection: keep-alive
Content-Length: 0
Accept: application/json;odata=verbose
Origin: chrome-extension://fdmmgilgnpjigdojojpjoooidkmcomcm
CSP: active
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.81 Safari/537.36
Accept-Encoding: gzip, deflate
Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
Cookie: WSS_FullScreenMode=false; rtFa=*cookievalue*; FedAuth=*cookievalue*

I copied all of the above headers into Fiddler’s Composer tab, then started to remove the additional headers one by one to see if any made a difference.

When I removed the Origin: chrome-extension://fdmmgilgnpjigdojojpjoooidkmcomcm header, the request succeeded!

I then added a rule to Fiddler in the static function OnBeforeRequest(oSession: Session) method:

oSession.RequestHeaders.Remove("origin");

And my requests from Postman to /_api/contextinfo now succeeded, and I was able to obtain the FormDigestValue from the JSON and use it as the value for the X-RequestDigest HTTP header on subsequent HTTP POST calls to REST endpoints.

Now…should I be leaving this Fiddler rule in place “for all requests”? I don’t know. All I know for now is that, in my specific environment, this is what I was seeing, and removing the Origin header allowed the call to /_api/contextinfo to execute successfully.

I suspect that once I start issuing calls that do require the Origin header (guessing here, but calls from an App Web to a Host Web for example) then I’ll maybe run into issues. Need to investigate further.

YMMV


Fixing the Save site as a template error when you have provisioned custom site columns in SharePoint 2013

***UPDATE: 18/03/2015***
I’ve updated this article to discuss the specifics of when this issue can arise.

I had a report of a user experiencing an error using Save site as a template on a straightforward Team Site in 2013.

tl;dr
If you provision a custom site column with the Overwrite=”TRUE” attribute using SPFieldCollection.AddFieldAsXml, the Save site as template function will fail: the underlying code that generates the wsp file adds a duplicate Overwrite=”TRUE” attribute to the site column, producing an xml file error. You need to update any such site column’s SchemaXml property to remove the Overwrite=”TRUE” attribute.

I was able to re-create the error on a test server and began the investigation.

Note: the farm was patched up to the August 2013 CU only, but I’ve had a quick check of the codebase in SP1 and it appears that the issue discussed below persists.

In ULS I saw the following error for the correlation id:

[Forced due to logging gap, Original Level: Monitorable] System.Xml.XmlException: ‘Overwrite’ is a duplicate attribute name. Line 1, position 327.
at System.Xml.XmlTextReaderImpl.Throw(String res, String arg, Int32 lineNo, Int32 linePos)
at System.Xml.XmlTextReaderImpl.AttributeDuplCheck()
at System.Xml.XmlTextReaderImpl.ParseAttributes()
at System.Xml.XmlTextReaderImpl.ParseElement()
at System.Xml.XmlTextReaderImpl.ParseDocumentContent()
at Microsoft.SharePoint.SPSolutionExporter.WriteXmlToWriter(XmlWriter output, String xml, Boolean skipDocumentElement, Boolean addCdata)
at Microsoft.SharePoint.SPSolutionExporter.ExportFields(SPFieldCollection fields, String partitionName)
at Microsoft.SharePoint.SPSolutionExporter.ExportListsManifest(ListInstancesExportSummaryInfo exportSummary, ModuleExportSummaryInfo moduleExportSummary, List`1 workflowContentTypes, String workflowForm, String serverRelativeworkflowForm)
at Microsoft.SharePoint.SPSolutionExporter.ExportLists()
at Microsoft.SharePoint.SPSolutionExporter.GenerateSolutionFiles()
at Microsoft.SharePoint.SPSolutionExporter.ExportWebAsSolution()

and

System.InvalidOperationException: Error generating solution files in temporary directory.
at Microsoft.SharePoint.SPSolutionExporter.ExportWebAsSolution()
at Microsoft.SharePoint.SPSolutionExporter.ExportWebToGallery(SPWeb web, String solutionFileName, String title, String description, ExportMode exportMode, Boolean includeContent, String workflowTemplateName, String destinationListUrl, Action`1 solutionPostProcessor, Boolean activateSolution)
at Microsoft.SharePoint.ApplicationPages.SaveAsTemplatePage.BtnSaveAsTemplate_Click(Object sender, EventArgs e)
at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument)
at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

The first clue here is that ‘Overwrite’ is a duplicate attribute name. The second is this part of the stack trace:

at Microsoft.SharePoint.SPSolutionExporter.ExportFields(SPFieldCollection fields, String partitionName)

My first thought was: were there any custom site columns on this site? There were several. The custom site columns had been provisioned first to the farm Content Type Hub and then published to the site via a custom content type.

The custom site columns were deployed to the farm as part of a wsp, created in Visual Studio. The Visual Studio solution contained a few features that included your typical Elements.xml file containing some site columns defined using CAML.

Now, if you’re like me, you’ll “probably” have got into the habit of adding the Overwrite=”TRUE” attribute to your site columns, usually so that your develop, deploy, test workflow continuously deploys the “most up to date” version of your custom artefacts. It is also a “requirement” for certain types of site column to have Overwrite=”TRUE” as part of their definition.

As an example, the following are typical blog posts stating the requirement of adding the Overwrite=”TRUE” attribute:

https://mdashiqur.wordpress.com/2014/02/09/lookup-field-as-a-site-column-using-caml-sharepoint-lookup-field/

https://andrewonsoftware.wordpress.com/2011/10/27/how-to-define-lookup-column-via-caml-in-sharepoint-2010-and-avoid-errors/

We have established that our test site has custom columns that contain the Overwrite=”TRUE” attribute, and this seems to be what is causing the issue with Save site as a template.

Taking a trip down memory lane: what Save site as a template actually does is create a wsp containing all the artefacts required to create a “copy” of a site. This can be very useful for Site Collection owners to define what each subsite should “look like”. Save site as a template is also a useful feature for developers to learn how to “write CAML”; in fact, what a lot of developers used to do was start off any custom site definition/web template by first prototyping in the UI, using Save site as a template, downloading the wsp from the Site Collection solution gallery, then cracking open the wsp and adding the files into Visual Studio.

The biggest issue with this prototyping approach was that the wsp generated by Save site as a template didn’t create “round-trippable” CAML definitions for fields. A developer would add all the wsp files into their solution and try to deploy it; it would “appear” to have worked, but site columns such as Lookup columns wouldn’t get provisioned correctly. So “it is known” that using Save site as a template and Visual Studio together requires more tweaking of the generated files.

Now, going back to the stack trace of the error in ULS, I opened ILSpy and started to work my way through the codebase of the SPSolutionExporter class, specifically the Microsoft.SharePoint.SPSolutionExporter.ExportFields(SPFieldCollection fields, String partitionName) method.

Here’s what ILSpy decompiles the method to:

// Microsoft.SharePoint.SPSolutionExporter
private void ExportFields(SPFieldCollection fields, string partitionName)
{
	if (fields.Count <= 0 || this.WorkflowExportModeIsEnabled)
	{
		ULS.SendTraceTag(894035u, ULSCat.msoulscat_WSS_SolutionExporter, ULSTraceLevel.Verbose, "There are no fields to export for partition \"{0}\" so the feature will not be included in this solution.", new object[]
		{
			partitionName
		});
		return;
	}
	SPSolutionExporter.FieldsExportSummaryInfo fieldsExportSummaryInfo = new SPSolutionExporter.FieldsExportSummaryInfo();
	foreach (SPField sPField in fields)
	{
		string title = sPField.Title;
		try
		{
			SPSolutionExporter.FieldExportSummaryInfo fieldExportSummaryInfo = SPSolutionExporter.ExportField(sPField, this.web);
			fieldsExportSummaryInfo.FieldExportSummaryInfoEntries.Add(fieldExportSummaryInfo.SortedName, fieldExportSummaryInfo);
		}
		catch (Exception ex)
		{
			string strMessage = string.Format(this.web.UICulture, SPResource.GetString("SitePackaging_ErrorExportingField", new object[0]), new object[]
			{
				title
			});
			ULS.SendTraceTag(894036u, ULSCat.msoulscat_WSS_SolutionExporter, ULSTraceLevel.Monitorable, ex.ToString());
			throw new SPException(strMessage);
		}
	}
	string text = SPSolutionExporter.ConvertWebRelativeUrlToPartitionedRelativePath("ElementsFields.xml", partitionName);
	ULS.SendTraceTag(894037u, ULSCat.msoulscat_WSS_SolutionExporter, ULSTraceLevel.Verbose, "Creating field feature manifest file '{0}'", new object[]
	{
		text
	});
	using (ScopedXmlWriter scopedXmlWriter = new ScopedXmlWriter(this.CreateXmlWriterInStagingArea(text), text))
	{
		using (new ScopedXmlWriterElement(scopedXmlWriter.Value, "", "Elements", "http://schemas.microsoft.com/sharepoint/"))
		{
			foreach (KeyValuePair<string, SPSolutionExporter.FieldExportSummaryInfo> current in fieldsExportSummaryInfo.FieldExportSummaryInfoEntries)
			{
				SPSolutionExporter.FieldExportSummaryInfo value = current.Value;
				SPSolutionExporter.WriteXmlToWriter(scopedXmlWriter.Value, value.SchemaXml);
			}
		}
	}
	string text2 = SPSolutionExporter.ConvertWebRelativeUrlToPartitionedRelativePath("Feature.xml", partitionName);
	fieldsExportSummaryInfo.FeatureFileRelativePath = text2;
	ULS.SendTraceTag(894038u, ULSCat.msoulscat_WSS_SolutionExporter, ULSTraceLevel.Verbose, "Creating fields feature file '{0}'", new object[]
	{
		text2
	});
	using (ScopedXmlWriter scopedXmlWriter2 = new ScopedXmlWriter(this.CreateXmlWriterInStagingArea(text2), text2))
	{
		using (new ScopedXmlWriterElement(scopedXmlWriter2.Value, "", "Feature", "http://schemas.microsoft.com/sharepoint/"))
		{
			SPSolutionExporter.WriteXmlAttribute(scopedXmlWriter2.Value, string.Empty, "Id", null, fieldsExportSummaryInfo.FeatureId);
			SPSolutionExporter.WriteXmlAttribute(scopedXmlWriter2.Value, string.Empty, "Title", null, "Fields feature of exported web template \"" + this.web.Title + "\"");
			SPSolutionExporter.WriteXmlAttribute(scopedXmlWriter2.Value, string.Empty, "Version", null, "1.0.0.0");
			SPSolutionExporter.WriteXmlAttribute(scopedXmlWriter2.Value, string.Empty, "Scope", null, "Web");
			SPSolutionExporter.WriteXmlAttribute(scopedXmlWriter2.Value, string.Empty, "Hidden", null, true);
			SPSolutionExporter.WriteXmlAttribute(scopedXmlWriter2.Value, string.Empty, "RequireResources", null, true);
			using (new ScopedXmlWriterElement(scopedXmlWriter2.Value, string.Empty, "ElementManifests", null))
			{
				using (new ScopedXmlWriterElement(scopedXmlWriter2.Value, string.Empty, "ElementManifest", null))
				{
					SPSolutionExporter.WriteXmlAttribute(scopedXmlWriter2.Value, string.Empty, "Location", null, Path.GetFileName(text));
				}
			}
		}
	}
	ULS.SendTraceTag(894039u, ULSCat.msoulscat_WSS_SolutionExporter, ULSTraceLevel.Verbose, "Exported {0} fields into partition \"{1}\".", new object[]
	{
		fieldsExportSummaryInfo.FieldExportSummaryInfoEntries.Count,
		partitionName
	});
}

The key part is the call to SPSolutionExporter.ExportField inside the foreach loop:

		try
		{
			SPSolutionExporter.FieldExportSummaryInfo fieldExportSummaryInfo = SPSolutionExporter.ExportField(sPField, this.web);
			fieldsExportSummaryInfo.FieldExportSummaryInfoEntries.Add(fieldExportSummaryInfo.SortedName, fieldExportSummaryInfo);
		}

This is the code that generates the xml that is eventually written to disk as an xml file. The SPSolutionExporter.ExportField method calls SPSolutionExporter.GetFieldSchemaXml, which in turn calls the SPSolutionExporter.GenerateSchemaXmlForExport method:

internal string GenerateSchemaXmlForExport(bool addOverWriteAttribute, bool removeSealedAttribute)
{
	string text = this.SchemaXml;
	text = SPUtility.RemoveXmlAttributeWithNameFromFirstNode(text, "Field", "Version");
	if (removeSealedAttribute)
	{
		text = SPUtility.RemoveXmlAttributeWithNameFromFirstNode(text, "Field", "Sealed");
	}
	if (addOverWriteAttribute)
	{
		string attributeNameValue = "Overwrite" + "=\"TRUE\"";
		text = SPUtility.AddXmlAttributeToFirstNode(text, "Field", attributeNameValue);
	}
	return text;
}

As we can see, the key part of the above is this snippet:

	if (addOverWriteAttribute)
	{
		string attributeNameValue = "Overwrite" + "=\"TRUE\"";
		text = SPUtility.AddXmlAttributeToFirstNode(text, "Field", attributeNameValue);
	}

The culprit is the addOverWriteAttribute block. The Overwrite=”TRUE” attribute is added to each field’s xml EVEN IF IT ALREADY EXISTS. I’ll say that again: even if the current field’s SchemaXml property already contains the Overwrite=”TRUE” attribute, SPSolutionExporter.GenerateSchemaXmlForExport adds it in again. And this is the reason the error is generated: we have custom site columns that contain the Overwrite=”TRUE” attribute, and it is these custom site columns that are causing the Save site as a template functionality to fail.

Once this code unwinds, an attempt is eventually made to write out an xml document, and the string passed to the XmlWriter contains an element with two Overwrite=”TRUE” attributes, so it fails.
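The failure is easy to reproduce outside SharePoint, since any conforming XML parser rejects an element that carries the same attribute twice. A minimal illustration in Python (the field XML is made up; the string append mimics, rather than reproduces, the addOverWriteAttribute branch):

```python
import xml.etree.ElementTree as ET

# A made-up field that was provisioned with Overwrite="TRUE" already present...
schema_xml = '<Field Name="MyColumn" Type="Lookup" Overwrite="TRUE" />'

# ...to which we append the attribute a second time, mimicking what the
# exporter's addOverWriteAttribute branch does to the field's SchemaXml.
exported = schema_xml.replace("<Field ", '<Field Overwrite="TRUE" ', 1)

try:
    ET.fromstring(exported)
    parse_failed = False
except ET.ParseError:
    parse_failed = True  # duplicate attribute -> parse error, as seen in ULS
```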

If we think about what is happening here, what “I think” is that Microsoft received feedback that the wsp file created using Save site as a template did not produce correct “round-trippable” xml for certain types of columns, and so they added this code to the SPSolutionExporter class to mitigate the scenario of a developer using the UI to prototype, saving as a template, and adding the wsp files into Visual Studio.

Unfortunately, it seems that the very people they’re trying to help – developers – are the most likely to be impacted by this bug, as it is developers who will be creating custom site columns in CAML (and migrations from 2010 will almost certainly have custom site columns defined via CAML).

***UPDATE: 18/03/2015***
I’ve identified that the cause of this issue is when custom columns are added using the SPFieldCollection.AddFieldAsXml method and not as a declarative field via a feature. The codebase for SPFieldCollection.AddFieldAsXml must be taking the xml passed to it and copying it to the SPField.SchemaXml property.

Now that we have identified the cause, is there a solution to this issue? The first option would be to never deploy site columns that are defined via CAML and contain the Overwrite=”TRUE” attribute.

But that is perhaps not a practical solution: there are already many site columns defined this way, and it might not be feasible to re-deploy them (in particular, a lot of farms will have deployed custom content types to the content type hub, and the admins of those farms may well have scenarios where they cannot re-publish the content types).

Also (and I’ve not tested this), it might not be possible to get custom site columns deployed at all if they are defined with CAML and are of type Lookup.

Going back to the codebase of SPSolutionExporter, the SPField.SchemaXml property is what is used to build up the xml that will eventually be written to disk:

internal string GenerateSchemaXmlForExport(bool addOverWriteAttribute, bool removeSealedAttribute)
{
	string text = this.SchemaXml;
	text = SPUtility.RemoveXmlAttributeWithNameFromFirstNode(text, "Field", "Version");

So, is there a way for us to (1) identify the fields that will cause the error and (2) update those fields to remove it? The answers are yes and yes. The following PowerShell snippet can be used to identify fields that will cause the error:

$w = Get-SPWeb -Identity https://test.company.internal/sites/testsite
$w.Fields | ?{$_.SchemaXml -like "*Overwrite=*"} | select Title, StaticName | ft

And the following PowerShell can be used to update the SchemaXml property to “remove” the issue:

$w = Get-SPWeb -Identity https://test.company.internal/sites/testsite
$fs = $w.Fields | ?{$_.SchemaXml -like "*Overwrite=*"}
foreach($f in $fs){
    $f.Title
    $f.SchemaXml = $f.SchemaXml.replace("Overwrite=`"TRUE`"", "")
}
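As a sanity check on that string replacement, here is a Python analogue showing that once Overwrite=”TRUE” is stripped, the exporter can append the attribute again without producing a duplicate (the field XML is a made-up example):

```python
import xml.etree.ElementTree as ET

# Hypothetical SchemaXml for a provisioned column, as the PowerShell above
# would see it.
schema_xml = '<Field Name="MyColumn" Type="Text" Overwrite="TRUE" />'

# Same edit as the PowerShell snippet: strip the attribute text wholesale.
cleaned = schema_xml.replace('Overwrite="TRUE"', "")

# The exporter can now safely re-append Overwrite="TRUE" without creating
# a duplicate, so the generated manifest parses again.
re_exported = cleaned.replace("<Field ", '<Field Overwrite="TRUE" ', 1)
element = ET.fromstring(re_exported)
```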

You can run this on an individual site to allow you to save the site as a template. If you have a content type hub, then run the snippet against the content type hub, but then (importantly) make sure you publish all content types that use the custom site columns.

YMMV

SharePoint 2013 error after creating a view

There have been several reports of users experiencing an error when creating a view on an out-of-the-box SharePoint 2013 Team Site.

tl;dr 
Requests made via a load balancer that strips out the Accept-Encoding header cause an error creating a view on a SharePoint 2013 site with MDS activated.

My current environment is SharePoint 2013 patched up to the August 2013 CU, on a Windows 2012 server, and I too had reports from users that, when creating a view on a document library, they would see a page with the details:

Error
Cannot complete this action.
Please try again.
Troubleshoot issues with Microsoft SharePoint Foundation.
GO BACK TO SITE

The view was actually created successfully, but the user is presented with this error page, and that is not something we want to be seeing.

Of interest here is that the url of the error page ends with the following (more on this later), and the fact that there is no correlation id presented either:

/_vti_bin/owssvr.dll?CS=65001

A SharePoint 2013 Team Site has the MDS (Minimal Download Strategy) feature enabled by default, and I had read that with the MDS feature deactivated the creation of a view would succeed. So to test, I created a simple team site and tried creating a view on the Documents library. I was presented with the error as above. I then deactivated the MDS feature on the site, tried creating the view again, and this time it succeeded.

So we have a scenario where deactivating the MDS feature removes the error from creating a view. I was not satisfied with this, as MDS is not a feature I think we should just deactivate, so I started down the route of seeing whether there were any errors reported on the server.

In our environment we use a load balancer in front of SharePoint (F5 BigIP version 11.3.0 build 3144.0 Hotfix HF8), so the first thing I wanted to do was point my browser directly at a specific SharePoint server so I could examine the logs on that server.

So I modified my local HOSTS file to point my SharePoint hostnames directly at a server, bypassing the load balancer. And…I no longer experienced the error!

So it would appear that the load balancer was “getting in the way” of the request to create a view. The next thing I did was fire up Fiddler and execute a request first via the load balancer (and therefore see the error) and then a request bypassing the load balancer (with no error), and examine the raw requests and responses.

The key request to examine was the POST request to the endpoint:

/_vti_bin/owssvr.dll?CS=65001

Both the request and response bodies were almost identical, but for the request via the load balancer Fiddler in fact reported a protocol violation. On further examination, the response headers for the load balanced request seemed strange; the header:

Content-Length: 0

was causing the protocol violation, as the response does in fact have a body. Of note here is that the response code with MDS activated is a 200, with a response body containing some | separated values including the view page url to redirect to.
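The protocol violation Fiddler flagged can be stated mechanically: the declared Content-Length disagrees with the actual body length. A small illustration (the header and body values are invented for the example):

```python
# Illustration of the protocol violation Fiddler reported: a response that
# declares Content-Length: 0 yet carries a body. Values are invented.

def violates_content_length(headers, body):
    declared = headers.get("Content-Length")
    if declared is None:
        return False  # no declared length, nothing to contradict
    return int(declared) != len(body)

# Response as seen via the load balancer: zero declared length, real body.
lb_headers = {"Content-Length": "0"}
lb_body = b"some|pipe|separated|payload"

broken = violates_content_length(lb_headers, lb_body)
```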

In the scenario mentioned earlier where we deactivate the MDS feature and create a view with no error, the Fiddler trace shows that the POST request to owssvr.dll is met with a 302 redirect response (so the code path inside owssvr.dll must choose a different response type based on the fact that it has received an MDS request). All MDS requests have an HTTP form variable:

_MINIMALDOWNLOAD=1

This must be used by the code path inside owssvr.dll to decide if the response code is a 200 with a response body, or a 302 with no response body (more on this at the end of this article).

Now, from what I can tell, the fact that the response contains a Content-Length: 0 header means (I’m guessing) the client side code in the browser “gets its knickers in a twist”, as we would say in Scotland. That is why the user is presented with an error: the browser then attempts to make subsequent incorrect requests (the subsequent incorrect requests are not the issue; it’s the response to the POST to owssvr.dll that is the issue).

I then went to our load balancer team and we set up a temporary load balancer configuration where the load balancer would only target one server (just to isolate the investigation of log files to one server).

On the single server that the load balancer was pointing to, I started Wireshark to capture the incoming requests. I then repeated my earlier set of steps: make a request to the server directly, bypassing the load balancer (no error), then make a request via the load balancer (see the error). I then examined and compared the requests as captured by Wireshark.

The difference between the requests was as follows: for the POST to owssvr.dll, the request that bypassed the load balancer had the following header:

Accept-Encoding: gzip, deflate

The request sent via the load balancer did not have this header. Of note here is that earlier, when I was examining the requests in Fiddler, the Accept-Encoding header was present in both, so it would appear that the load balancer is removing the Accept-Encoding header.

We then proceeded to look at the load balancer configuration and lo and behold there is a setting:

Keep Accept Encoding

And this setting was disabled (which, from what I understand, is the default), so the Accept-Encoding header was not passed on by the load balancer.

We then enabled the Keep Accept Encoding setting and re-tested – and this time creating a view with MDS enabled, going via the load balancer, succeeded.

Further examination of the headers in fiddler and wireshark showed that the Accept-Encoding header was sent through as part of the request and that the Content-Length header is returned with the correct value.

So this would appear to be a bug in the code for owssvr.dll, specifically concerning the fact that there is no Accept-Encoding header present for the code path that handles the MDS requests. It would appear that the owssvr.dll response to a request with no Accept-Encoding header is to send back a Content-Length: 0 header, even though it does in fact send back a response body.

At the browser end of things, I can only assume that the low level error handling in the XMLHttpRequest object (MDS makes use of XMLHttpRequest to make server side calls) finds a Content-Length: 0 header that does not match up with the fact that there is actually a response body, and it then tries to make the request again using an HTTP GET to owssvr.dll, which the server responds to with an error.

Going back to the start of the article: my current patch level is the August 2013 CU. We do have a plan to apply SP1 but have just not gotten around to it yet.

It would be useful to know if anyone else has experienced the same issue with SP1 applied.

YMMV


SharePoint 2013 Audit Log Trimming – remember to edit your timer jobs to fit your log retention

SharePoint 2013 offers the option of audit logs:

Configure audit settings for a site collection

One of the settings available is to trim the audit logs and optionally save the trimmed data into a library. As the above article states the default is every 30 days, but you can change the log retention to some other value.

If you do want to change your log retention to something other than the default (say 7 days) then you might find that your logs don’t appear to be getting trimmed. Make sure that you also set the schedule of the corresponding timer job to match your retention schedule.

Each web application will have its own Audit Log Trimming job. The default schedule for the timer job is monthly, so if you want to have a retention of 7 days then set the schedule of the timer jobs to weekly and test the results.
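As a sketch of what aligning the schedule looks like in PowerShell (verify the job title and schedule string in your own farm first; the web application URL is a placeholder):

```powershell
# Sketch only - confirm the job title in your own farm before running.
# Each web application has its own Audit Log Trimming job.
$job = Get-SPTimerJob -WebApplication https://test.company.internal |
    ?{$_.Title -like "*Audit Log Trimming*"}

# Align the job with a 7 day retention: run weekly instead of monthly.
Set-SPTimerJob -Identity $job -Schedule "weekly between sat 02:00:00 and sat 03:00:00"
```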

YMMV


SharePoint 2013 Content Search Web Part and filtering on User Profile Properties

I am working on a new SharePoint 2013 implementation for an organisation that has a particular desire to have as much personalised content as possible on the Intranet home page. The idea is to have say one main area on the home page for company news, then to have other areas that surface news targeted for Your Department and Your Job Title.

In order to achieve this we need a publishing site, and Content Types with Site Columns of type Managed Metadata mapped to the People->Department and People->Job Title term sets. Then we simply add some Content Search Web Parts to the publishing site home page and configure the KQL to return only pages that match the appropriate User Profile property.

Note: I’m going to assume that you’ve got the farm set up with a functioning Managed Metadata service, Search service (with continuous crawling configured), and a User Profile service that is populated with several users, all of which have values in their Department and Job Title properties.

The corresponding Managed Metadata People-Department and People-Job Title term sets should also have the values populated from the User Profile service.

In my examples below I am using Departments such as Finance and Product Development, and Job Titles such as Finance Manager, Accountant, Senior Project Manager, Project Manager and Developer. If you don’t have the exact same values in your user profiles then make sure you substitute the values in step 14 below with values from your profiles.

1. In Central Admin, create a site collection based on the Publishing Portal site definition.

2. Once the site collection is created, browse to the site.

3. Give Everyone permission to at least read the site.

4. Go to Site Actions->Site Settings->Site columns and create the following Site Columns:

Name: FilterOnDepartment
Type: Managed Metadata
Term Set: People->Department

Name: FilterOnJobTitle
Type: Managed Metadata
Term Set: People->Job Title

5. Go to Site Actions->Site Settings->Site content types and create a Content Type using the following settings:

Name: FilterOnUserProfile
Parent Content Type: Article Page (from the Page Layouts Content Type parent)

6. Add the following existing site columns to the FilterOnUserProfile content type:

FilterOnDepartment
FilterOnJobTitle

7. Go to Site Actions->Site Settings->Site libraries and lists
8. Click on Customise “Pages”
9. Click on Add from existing site content types
10. Add the following content type:

FilterOnUserProfile

11. Select Site Actions->Site Contents
12. Click on Pages
13. From the Ribbon, select the FILES tab
14. You will now add a series of pages by clicking on the New Document drop-down and selecting FilterOnUserProfile for each of the following values (once you create the page, you’ll be returned to the Pages library; for the page you just created select Edit Properties):

Title: Page For Department Finance
Page Layout: (Article Page) Body Only
(once page is created and you’ve clicked on Edit Properties)
Content Type: FilterOnUserProfile
Comments: This page has a FilterOnDepartment column value of Finance.
FilterOnDepartment: Finance

Title: Page For Department Product Development
Page Layout: (Article Page) Body Only
(once page is created and you’ve clicked on Edit Properties)
Content Type: FilterOnUserProfile
Comments: This page has a FilterOnDepartment column value of Product Development.
FilterOnDepartment: Product Development

Title: Page For Job Title Developer
Page Layout: (Article Page) Body Only
(once page is created and you’ve clicked on Edit Properties)
Content Type: FilterOnUserProfile
Comments: This page has a FilterOnJobTitle column value of Developer.
FilterOnJobTitle: Developer

Title: Page For Job Title Finance Manager
Page Layout: (Article Page) Body Only
(once page is created and you’ve clicked on Edit Properties)
Content Type: FilterOnUserProfile
Comments: This page has a FilterOnJobTitle column value of Finance Manager.
FilterOnJobTitle: Finance Manager

Title: Page Finance and Accountant
Page Layout: (Article Page) Body Only
(once page is created and you’ve clicked on Edit Properties)
Content Type: FilterOnUserProfile
Comments: This page has a FilterOnDepartment column value of Finance AND a FilterOnJobTitle column value of Accountant.
FilterOnDepartment: Finance
FilterOnJobTitle: Accountant

15. Once you’ve created all of the pages you need to check in and publish a major version of each page.

16. Now you’ll need to wait for search to crawl the newly added content. If you have continuous crawling then you’ll need to wait at the most 15 minutes. If you don’t have continuous crawling you’ll need to execute a full crawl.
Note: If you’re impatient and want to know if the continuous crawl has found your new pages, in Central Admin go to the Search Service application, click on Search Schema and then enter owstaxid into the Managed property filter and click the -> button. You should see managed properties corresponding to your site columns.

17. Once you’re happy that the crawl has finished, you now need to check that the managed properties have been created and populated with the values. Browse to the home page of your publishing site and enter the following search criteria into the Search this site box:

owstaxidFilterOnDepartment:Finance
(for each of the site columns you created in step 4 a new managed property prefixed with owstaxid, such as owstaxidFilterOnDepartment, will have been created and populated by the crawl)

18. If you are seeing results from the previous search then you can proceed with the next steps of adding the web parts to the home page of the publishing site. If you’re not seeing results then spend some time examining your search configuration and consider running a full crawl.

19. Browse to the home page of the publishing site
20. Choose Site Actions->Edit Page
21. In the Page Content area edit to your taste (I add the text “This page will allow you to see articles based on your User Profile values for Department and Job Title.”)
22. Delete all the existing web parts from the Top Left and Top Right zones.
23. In the Top Left zone click Add a Web Part and add a Content Search web part (from the Content Rollup category).
24. For the Content Search web part you just added select the Edit Web Part menu option.
25. In the Web Part editor on the right, click the Change Query button.
26. You will now be in the Build Your Query dialogue with the BASICS tab selected. Click the Switch to Advanced Mode option.
27. In the Query text box clear all the content and add the following:
owstaxidFilterOnDepartment:{User.Department}
Note: In a production environment you’d want to add several other search criteria here such as the content type but for the purposes of this small demo this query text will suffice.

28. Click the OK button
Note: If you’re carrying out all of these steps as, say, the setup account (like me) then you might be tempted to click the Test Query button in the previous step. If you do, you’ll probably not see any results returned unless the setup account has a Department value set in its User Profile.

29. In the Web Part editor on the right, in the Display Templates section choose the Two lines option from the Item drop down.
30. Expand the Property Mappings section, select the Change the mapping of managed properties… check box and then select the CommentsOWSMTXT option from the Line 2 drop down.
31. Expand the Appearance section and enter Department News for the Title and then select the Title Only option from the Chrome Type drop-down.
32. Click the OK button.
33. Save, check-in and publish the page.
34. Now log in as a user from the Finance department and check that two articles appear in the Department News web part.
35. You can now add another web part to the Top Right zone, this time setting the filter for Job Title. The key difference from the previous steps is the Query Text; use the following:
owstaxidFilterOnJobTitle:{User.SPS-JobTitle}
36. Finally, add another web part to the Header zone; this web part will display news for Department OR Job Title (just to demonstrate that you can combine results). The Query Text to use is:
(owstaxidFilterOnDepartment:{User.Department} OR owstaxidFilterOnJobTitle:{User.SPS-JobTitle})
Note: The default operator is AND and so you *could* change this query to reduce the articles presented by removing the OR.
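For completeness, since KQL treats adjacent property restrictions as an implicit AND, the tighter variant mentioned in the note above would look like this (a sketch, using the same managed property names generated by the crawl):

```
owstaxidFilterOnDepartment:{User.Department} owstaxidFilterOnJobTitle:{User.SPS-JobTitle}
```

This version only returns pages that match both the user’s Department and their Job Title, so expect fewer articles than with the OR form.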

YMMV

Posted in Uncategorized

Number Mysteries – Considering Bases Exercise

Here’s a sample C# program for working out the Considering Bases exercise:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace NumberMysteries
{
	/// <summary>
	///  Wrapper for the Number Mysteries course.
	/// </summary>
	class Program
	{
		/// <summary>
		/// Standard entry point.
		/// </summary>
		/// <param name="args">Ignored.</param>
		static void Main(string[] args)
		{
			// Wizarding currency: 29 Knuts to a Sickle, 17 Sickles to a Galleon.
			int knutsInSickle = 29;
			int sicklesInGalleon = 17;
			int knutsInGalleon = knutsInSickle * sicklesInGalleon; // 493

			// Answer A: convert 1 Galleon, 14 Sickles and 3 Knuts into Knuts.
			int answerA = (1 * knutsInGalleon) + (14 * knutsInSickle) + 3;

			// Answer B: convert 1509 Knuts into Galleons, Sickles and Knuts
			// using integer division and remainder.
			int numGalleons = 1509 / knutsInGalleon;
			int knutsLeftOver1 = 1509 % knutsInGalleon;
			int numSickles = knutsLeftOver1 / knutsInSickle;
			int knutsLeftOver2 = knutsLeftOver1 % knutsInSickle;

			Console.WriteLine("For answer A:\nNumber of knuts:{0:d}", answerA);
			Console.WriteLine("For answer B:\nNumber of Galleons:{0:d}, Number of Sickles:{1:d}, Knuts left over:{2:d}", numGalleons, numSickles, knutsLeftOver2);
		}
	}
}
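A quick sanity check of the arithmetic, using the 29 Knuts-per-Sickle and 17 Sickles-per-Galleon rates from the code:

```
1 Galleon = 17 × 29 = 493 Knuts

Answer A: 1 Galleon + 14 Sickles + 3 Knuts
        = 493 + (14 × 29) + 3 = 902 Knuts

Answer B: 1509 Knuts
        → 1509 ÷ 493 = 3 Galleons, remainder 30 Knuts
        → 30 ÷ 29   = 1 Sickle, remainder 1 Knut
        = 3 Galleons, 1 Sickle, 1 Knut
```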

If you want to download the solution then click here.

Posted in Number Mysteries