We have a SharePoint list with 10,000 items, and we need to iterate over every item with a For Each action and do some processing in Nintex Workflow.

Following are our questions:

  • Is there a limit to the number of items that can be collected in the Query List action?
  • Will it handle item counts of 20k, 30k, 50k, or more?
  • Also, do we need to be concerned about a time limit for the workflow to complete?  If yes, what is the time limit and can we adjust it for a specific workflow?

Solution

There are limits to querying lists that SharePoint imposes internally (the List View Threshold).

It’s configurable in Central Administration -> Manage Web Applications -> General Settings -> Resource Throttling.

Paging would work around the threshold in code, but the Query List action doesn’t currently support it. Instead, you’d have to query the list through a web service call: Lists.asmx -> GetListItems.
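The paging pattern behind GetListItems can be sketched generically: each call returns one page of items plus a continuation token (in GetListItems this is the ListItemCollectionPositionNext attribute), and you keep calling until the token is empty. Here is a minimal Python sketch; `fetch_page` is a hypothetical callback standing in for the actual SOAP call, not a real SharePoint API.

```python
def query_all_items(fetch_page, page_size=1000):
    """Collect every item from a large list by paging.

    `fetch_page(page_size, position)` is assumed to wrap the
    Lists.asmx GetListItems call: it returns (items, next_position),
    where next_position plays the role of ListItemCollectionPositionNext
    and is None once the last page has been returned.
    """
    all_items = []
    position = None  # None means "start from the first page"
    while True:
        items, position = fetch_page(page_size, position)
        all_items.extend(items)
        if position is None:  # no continuation token: last page reached
            break
    return all_items
```

Because each request stays under the page size, no single query trips the List View Threshold, no matter how many items the list holds in total.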

Vadim Tabakman developed a Nintex UDA (User Defined Action) for this and confirmed that it works. You can download it from the link below.

https://docs.google.com/open?id=0BxRP6sYdzK3_ZmxzeVNMNmN4d28

There are some hard-coded values in the workflow, and the UDA expects a workflow constant credential named Farm Admin.

It’s very useful whenever you are querying large lists with Nintex.

For more details visit: http://vadimtabakman.com/nintex-workflow-querying-a-list-with-paging.aspx

Important Note:

I tested with 6,000 records (IDs 6001 to 12000) in my SharePoint list. To monitor progress, I print each item ID with a Log in History List action.

In the workflow history I can see all 6,000 item IDs printed, but the workflow status shows as “Failed on Start (retrying)”. The way to avoid this is described in the Important Note below.

For your information, the history of the last page is shown below (screenshot: nintex-largelist).

Important Note:

I have seen this issue when trying to start a workflow on many items at once. SharePoint can’t handle the load: it has an internal limit of 15 workflows executing at a single time. In most cases that is fine, because most workflows sit and wait, and waiting is not considered executing.

The other situation where I have seen this is when a single workflow tries to do too much at one time; it essentially hits some kind of timeout. To avoid this, put some logic into the workflow so that it pauses for 1 minute after every 100 records or so. This will solve the problem.
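The pause-every-N-records logic above can be sketched as a simple batching loop. This is an illustration of the throttling idea, not a Nintex API; `process` and `sleep` are hypothetical callbacks (in the actual workflow they would be the processing actions and a Pause For action).

```python
import time

def process_with_pauses(items, process, batch_size=100,
                        delay_seconds=60, sleep=time.sleep):
    """Process items one at a time, pausing after every `batch_size`
    records so the workflow engine has time to catch up.

    `sleep` is injectable so the delay can be stubbed out in tests.
    """
    for index, item in enumerate(items, start=1):
        process(item)
        # Pause after each full batch, but not after the final item.
        if index % batch_size == 0 and index < len(items):
            sleep(delay_seconds)
```

The pause gives SharePoint's workflow engine a window to commit pending work, which is what keeps a long-running workflow from hitting the “Failed on Start (retrying)” state described above.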