Data Toolkit

Version 2.0.0

Last Updated Jun 30, 2018

SQL Endpoints



All endpoints are accessed by a URL such as /Webhooks/SQLEndpoint.ashx/XYZ, where XYZ is what you define the endpoint URL (Value) to be. Any hostname that resolves to your Rock site can be used. To gain the benefits of tracking the currently logged in person, you will want to use the same hostname that your user is logged into. The easiest way to achieve this is to make your AJAX URL a relative one, such as /Webhooks/SQLEndpoint.ashx/XYZ.

So you know how you can set up routes to Rock pages, like /Person/{PersonId}, and it will go to that specific person's page? You can do the same with these SQL Endpoints. If you set the Value of the endpoint to /attendance/{MetricId}, then requesting the endpoint at /Webhooks/SQLEndpoint.ashx/attendance/123 will provide a SQL parameter named @MetricId for your SQL statement with the value of 123.
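To picture how a route template maps onto the final URL, here is a small JavaScript sketch. The helper name sqlEndpointUrl and its shape are our own invention for illustration, not part of the plugin:

```javascript
// Hypothetical helper: expand a route template such as '/attendance/{MetricId}'
// into the full webhook URL. Each {Name} placeholder in the Value becomes a
// SQL parameter named @Name on the server side.
function sqlEndpointUrl(template, values) {
  return '/Webhooks/SQLEndpoint.ashx' +
    template.replace(/\{(\w+)\}/g, (match, name) => encodeURIComponent(values[name]));
}

// Requesting this URL gives your SQL a @MetricId parameter with the value 123.
// sqlEndpointUrl('/attendance/{MetricId}', { MetricId: 123 })
//   → '/Webhooks/SQLEndpoint.ashx/attendance/123'
```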


99% of the time you can probably just leave this blank and everything will work the way you expect. But if you want to limit this SQL to only execute for a specific HTTP Verb, you can specify that verb here. Normally, endpoints that are asking for data from the database use GET and calls that are adding data to the database use POST or PUT. The DELETE verb is used, as you can guess, to delete data from the database. But there is nothing preventing you from running a SQL DELETE statement with a GET verb.


This is, of course, where you can define your SQL to be executed. There will always be two parameters defined for you.

  • @PersonId - If the user requesting access to this endpoint is logged into Rock then this will contain the Id of that Person. Otherwise it will contain the value 0.
  • @Body - If you use a POST, PUT or PATCH request then you are probably sending data along with the request. This data will be made available as-is in the form of the @Body parameter. See below for some creative uses of this value.

Security Roles

Security is important. You want to be able to limit access to specific data. Defined Values do not natively support security settings, so instead you can select the security roles that you want to limit access to. If you leave this field blank, then all users will be able to access it. The user need only match one of the selected security roles to be granted access.

One gotcha is that there is currently no way to specify "any authenticated user". You either have to allow full anonymous access or limit to a security role.

Optional Parameters

This is where things start to get juicy. You'll want to look over some of the examples to see a bigger picture of how to make use of these, but we'll cover some of the technical details right now. Required Parameters are those that you define in the Value as part of the route. Optional Parameters are those you want to allow the request to specify via query string parameters. As such, they are optional but will always be defined for your SQL code. If they are not provided in the URL then the default value you specify will be used instead.

To make life easier for you, we do a little automatic conversion on optional parameters. If the value can be converted to a decimal or integer value then it will be converted before being passed along to your SQL. This means if you pass 123 as a query string parameter, it will be converted to an integer value of 123. If, however, you specify a value of 123g it will be left as a string because it cannot be converted to either an integer or decimal.
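The conversion rules above can be sketched in JavaScript. This is only an illustration of the described behavior, not the plugin's actual code:

```javascript
// Mirror of the automatic conversion: values that parse cleanly as an
// integer or decimal become numbers; everything else stays a string.
function convertParameter(raw) {
  if (/^-?\d+$/.test(raw)) return parseInt(raw, 10);    // '123'  → 123
  if (/^-?\d+\.\d+$/.test(raw)) return parseFloat(raw); // '1.5'  → 1.5
  return raw;                                           // '123g' → '123g'
}
```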

One final feature of optional parameters is that they can be used to receive multiple values. For example, say we want to allow the requester to specify which schedules to limit the attendance data to, but we want them to be able to specify multiple schedules at one time. In SQL syntax this would be WHERE ScheduleId IN (@ScheduleId). If you specify a parameter multiple times in the query string, it will be processed as an array of values. So if you sent a query string of ?ScheduleId=4&ScheduleId=7, then you would effectively end up with this SQL: WHERE ScheduleId IN (4,7).
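On the client side, URLSearchParams makes it easy to repeat a parameter like this:

```javascript
// Append ScheduleId once per value; the key is repeated in the query string,
// which the endpoint turns into an array for use with IN (@ScheduleId).
const params = new URLSearchParams();
[4, 7].forEach(id => params.append('ScheduleId', id));
const query = '?' + params.toString(); // '?ScheduleId=4&ScheduleId=7'
```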

Note: When specifying the names of the optional parameters do not include a leading @ character. That will be added automatically.


Weekend Service Attendance

Okay, let's assume you have a number of metrics for holding Adult Attendance, High School, Middle School, etc. all the way down to Nursery. Let's call these Metric Ids 720, 721, 722, 723 and 724. Each metric has two partitions: one for Campus and one for Schedule. We want to create an endpoint that allows us to retrieve attendance data from those metrics.

Attendance Endpoint

    SELECT
    [MV].[YValue] AS [Value],
    [MV].[MetricValueDateTime] AS [DateTime],
    [C].[Name] + ' ' + [S].[Name] AS [Name],
    [C].[Name] AS [Campus],
    [S].[Name] AS [Schedule]
    FROM [MetricValue] AS [MV]
    INNER JOIN [Metric] AS [M] ON [M].[Id] = [MV].[MetricId]
    INNER JOIN [MetricPartition] AS [MPCampus] ON [MPCampus].[MetricId] = [M].[Id] AND [MPCampus].[EntityTypeId] = 67
    INNER JOIN [MetricValuePartition] AS [MVCampus] ON [MVCampus].[MetricValueId] = [MV].[Id] AND [MVCampus].[MetricPartitionId] = [MPCampus].[Id]
    INNER JOIN [Campus] AS [C] ON [C].[Id] = [MVCampus].[EntityId]
    INNER JOIN [MetricPartition] AS [MPSchedule] ON [MPSchedule].[MetricId] = [M].[Id] AND [MPSchedule].[EntityTypeId] = 54
    INNER JOIN [MetricValuePartition] AS [MVSchedule] ON [MVSchedule].[MetricValueId] = [MV].[Id] AND [MVSchedule].[MetricPartitionId] = [MPSchedule].[Id]
    INNER JOIN [Schedule] AS [S] ON [S].[Id] = [MVSchedule].[EntityId]
    WHERE [MV].[MetricId] IN (720, 721, 722, 723, 724)
      AND [MV].[MetricId] IN (@Metric)
      AND (@Schedule = '' OR [S].[Id] IN (@Schedule))
      AND (@Campus = '' OR [C].[Name] IN (@Campus))
      AND [MV].[MetricValueDateTime] >= DATEADD(MONTH, -@Months, GETDATE())
    ORDER BY [MV].[MetricValueDateTime]

The query up until the WHERE statement should be pretty straightforward. We are doing various joins so that we can get the campus and schedule partitions associated with the metric values. We return enough information to work with the data and know which campus, schedule and metric each value came from.

For this example, let's assume the route is /attendance and we have defined the following optional parameters and their default values.

  • Metric = 720
  • Schedule = empty string
  • Campus = empty string
  • Months = 12

Now, the first thing we do in the WHERE clause is limit our results to a pre-defined set of metric Ids. This prevents the user from passing in another metric Id that we did not intend them to have access to. Then we filter again based on what the user provided; if they did not provide a Metric value, the default of 720 is used.

The next two clauses further filter along the Campus and Schedule parameters. If the user did not provide any values for these, then a blank string is used, which the filter treats as "show all". Finally, we further filter by limiting to the past few months, which by default is 12.
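As a client-side sketch, here is one way a page might call this endpoint. The /attendance route and the Campus and Months parameter names come from the setup above; the helper itself is our own:

```javascript
// Build a request URL for the attendance endpoint defined above.
// Multiple campuses are sent by repeating the Campus parameter.
function attendanceUrl(campuses, months) {
  const params = new URLSearchParams();
  campuses.forEach(c => params.append('Campus', c));
  params.set('Months', months);
  return '/Webhooks/SQLEndpoint.ashx/attendance?' + params.toString();
}

// attendanceUrl(['Houston', 'Antonio'], 24)
//   → '/Webhooks/SQLEndpoint.ashx/attendance?Campus=Houston&Campus=Antonio&Months=24'
```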

So, to make this easier let's provide a few sample query strings and the resulting WHERE clause. I'm going to specify the WHERE clauses with the parameter values resolved, but rest assured they are actually passed as parameters so there is no worry about the user trying to specify a DROP statement hidden in there somewhere.

For example, a query string of ?Metric=720&Metric=721&Campus=Houston would produce:

WHERE [MV].[MetricId] IN (720, 721, 722, 723, 724)
      AND [MV].[MetricId] IN (720,721)
      AND ('' = '' OR [S].[Id] IN (''))
      AND ('Houston' = '' OR [C].[Name] IN ('Houston'))
      AND [MV].[MetricValueDateTime] >= DATEADD(MONTH, -12, GETDATE())

As you can see, we filter down to values from the Metrics 720 and 721, and only from the "Houston" campus.

A query string of ?Campus=Houston&Campus=Antonio&Months=24 would produce:

WHERE [MV].[MetricId] IN (720, 721, 722, 723, 724)
      AND [MV].[MetricId] IN (720)
      AND ('' = '' OR [S].[Id] IN (''))
      AND ('Houston' = '' OR [C].[Name] IN ('Houston', 'Antonio'))
      AND [MV].[MetricValueDateTime] >= DATEADD(MONTH, -24, GETDATE())

In this query, we are using the default metric Id of 720, but we are requesting values for the campuses "Houston" and "Antonio". We also requested 24 months of data instead of the default 12 months. Notice that the campus line checks for 'Houston' = '' even though we specified two values. This has to do with the way we provide the values and detect uses of multi-value checks like the IN keyword. If you want to know the specifics of why this happens, you can read the technical reasons in a later section.

A query string of ?Metric=720&Metric=721&Metric=722&Schedule=3&Schedule=4 would produce:

WHERE [MV].[MetricId] IN (720, 721, 722, 723, 724)
      AND [MV].[MetricId] IN (720, 721, 722)
      AND (3 = '' OR [S].[Id] IN (3,4))
      AND ('' = '' OR [C].[Name] IN (''))
      AND [MV].[MetricValueDateTime] >= DATEADD(MONTH, -12, GETDATE())

In this final query, we want the values of 3 metrics and 2 schedules. For example, these two schedule Ids might correspond to our Sunday morning services, excluding the Saturday night service.

Update Single Social Media Link

Since you are not limited to simply retrieving data from the database, here is an example of using an INSERT and UPDATE statement to set the value of a person's attributes.

Social Media Endpoint

IF @AttributeId IN (969, 970, 971) AND @PersonId != 0
BEGIN
    IF EXISTS (SELECT * FROM [AttributeValue] WHERE [AttributeId] = @AttributeId AND [EntityId] = @PersonId)
        UPDATE [AttributeValue]
            SET [Value] = @Body
            WHERE [AttributeId] = @AttributeId AND [EntityId] = @PersonId
    ELSE
        INSERT INTO [AttributeValue]
            ([IsSystem], [AttributeId], [EntityId], [Value], [Guid])
            VALUES (0, @AttributeId, @PersonId, @Body, NEWID())
END

First off, we check to make sure the specified attribute is an allowed one. On our system, those three attribute Ids are for the Twitter, Facebook and Instagram attributes. We also check to ensure the user is logged in. Next we check if an AttributeValue already exists. If so then we issue an UPDATE statement to update the existing value. If not then we issue an INSERT statement to create a new attribute value.

Now notice the @Body parameter is being used but hasn't been defined. This is defined for every request and contains any data sent along with a POST, PUT or PATCH request. Let's examine a jQuery request to update the Twitter attribute.

$.post('/Webhooks/SQLEndpoint.ashx/socialMedia/970', 'myTwitterAccount');

For this request, the @AttributeId will be 970 and the @Body will be myTwitterAccount.

Update All Social Media Links

So we just showed how you can update a single attribute at a time. What if we wanted to update all three with a single request? If you are running SQL Server 2016+ or Azure SQL then you can do just that.

Social Media Endpoint

IF EXISTS (SELECT * FROM [AttributeValue] WHERE [AttributeId] = 969 AND [EntityId] = @PersonId)
    UPDATE [AttributeValue]
        SET [Value] = JSON_VALUE(@Body, '$.Facebook')
        WHERE [AttributeId] = 969 AND [EntityId] = @PersonId
ELSE
    INSERT INTO [AttributeValue]
        ([IsSystem], [AttributeId], [EntityId], [Value], [Guid])
        VALUES (0, 969, @PersonId, JSON_VALUE(@Body, '$.Facebook'), NEWID())

IF EXISTS (SELECT * FROM [AttributeValue] WHERE [AttributeId] = 970 AND [EntityId] = @PersonId)
    UPDATE [AttributeValue]
        SET [Value] = JSON_VALUE(@Body, '$.Twitter')
        WHERE [AttributeId] = 970 AND [EntityId] = @PersonId
ELSE
    INSERT INTO [AttributeValue]
        ([IsSystem], [AttributeId], [EntityId], [Value], [Guid])
        VALUES (0, 970, @PersonId, JSON_VALUE(@Body, '$.Twitter'), NEWID())

IF EXISTS (SELECT * FROM [AttributeValue] WHERE [AttributeId] = 971 AND [EntityId] = @PersonId)
    UPDATE [AttributeValue]
        SET [Value] = JSON_VALUE(@Body, '$.Instagram')
        WHERE [AttributeId] = 971 AND [EntityId] = @PersonId
ELSE
    INSERT INTO [AttributeValue]
        ([IsSystem], [AttributeId], [EntityId], [Value], [Guid])
        VALUES (0, 971, @PersonId, JSON_VALUE(@Body, '$.Instagram'), NEWID())

So what we have done is take the guts of the previous example and duplicate it a few times. Then we modified each one to use a hard-coded attribute Id. Notice we are now using a statement such as JSON_VALUE(@Body, '$.Twitter') to get our value. This is a new function available in SQL Server 2016+ and Azure SQL. It lets us process a JSON-encoded string (@Body) and extract a single value from the tree path. The path can be pretty complex, so go read up on it and its related functions!

$.post('/Webhooks/SQLEndpoint.ashx/socialMedia', JSON.stringify({Twitter: 'tw', Facebook: 'fb', Instagram: 'ig'}));

The above creates a JSON object and then converts it to a string format. This JSON-encoded string will be passed to our SQL statement as the @Body parameter. Which, as you saw above, we can then parse on the fly. If we really wanted to be fancy, we could have included various checks to make sure the JSON values existed before making any changes, but this is a simplistic example.
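If you are building the request in JavaScript, one cheap safety net is to only send the keys the SQL actually reads. A sketch (the key names match the JSON paths used above; the helper itself is our own):

```javascript
// Build the request body, silently dropping any keys the endpoint ignores.
const allowedKeys = ['Twitter', 'Facebook', 'Instagram'];

function socialMediaBody(links) {
  const body = {};
  for (const key of allowedKeys) {
    if (links[key] !== undefined) {
      body[key] = links[key];
    }
  }
  return JSON.stringify(body);
}

// $.post('/Webhooks/SQLEndpoint.ashx/socialMedia',
//        socialMediaBody({ Twitter: 'tw', Facebook: 'fb', Instagram: 'ig' }));
```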

Data View Filters

Family Attendance in Group Types



This filter provides a way for you to filter people based on their family attendance. So it behaves very similarly to the core Attendance in Group Types filter with the following exceptions.

  1. It operates on all members of the family to count attendance.
  2. You can select multiple group types instead of a single one.
  3. It can require unique dates for each attendance record.

A note of clarification on the unique dates: this does not mean truly unique dates, but rather unique service times. See the examples below.

  • A family of four attends church at the 9am service. Both children check into different classes and the parents fill out a welcome form that later results in each getting an attendance record as well. With unique dates turned on, this would be counted as a single attendance record.
  • A family of four attends church. The parents and an elementary child attend the 9am service. The last child is in high school, which only has services at 11am, and thus attends the 11am service. In this case, with unique dates turned on, this would be counted as two attendance records.

Giving Frequency

Person Secured


This filter provides similar functionality to the core Giving Amount filter, except it allows you to filter people based on their giving frequency instead of total amounts.

By default, this filter is secured so that only Rock Administrators, Finance Administrators or Finance Users can use it. This means only a person in one of those security roles can build a Data View using this filter. Others can still use (that is see) the Data View results if they have access to the Data View itself, but they will not be able to make changes to the filter criteria.

Recent Logins



This straightforward filter allows you to find people who have logged in n number of times in a given date range. A common example, as illustrated above, would be to find everybody who has logged in at least once in the past 4 weeks.

SQL Query

All Entities Secured


This one is pretty straightforward, if you know SQL. Basically, you need to return a single column of Id numbers that will be included in the result set. Let's take a simple example.

    SELECT [Id] FROM [Person] WHERE [NickName] LIKE 'Dan%'

This will include all Persons whose nickname begins with Dan. Granted, this can easily be done with existing filters, but there are two use cases for this filter.

  1. Building a Data View that does not have a filter you need.
  2. Building a Data View whose filter is too complex, thus causing a timeout when running.

On the topic of performance, there are some things that happen under the hood that you may want to know about. This filter works by storing the Id numbers in a new database table and using a SQL join to perform the actual filtering. Storing these rows in the database is quite fast: a SQL query that returns 100,000 Id numbers takes only about 2.5 seconds to store those numbers. However, it also takes up about 32MB of database space.

This is where the cache time option comes in. By default these filter results will be cached for 10 minutes. This helps reduce the number of rows needlessly added to the database and also improves response time when viewing a Data View and clicking through the pages of results (each click runs the entire filter again, so it would otherwise insert a new set of rows). Normally this will probably not be a problem, but you can reduce this cache time to 0 seconds if you always need live results. I just don't recommend it if your filter is returning hundreds of thousands of results.

So we are caching things in the database. That Cache Time is also used by a new Job, installed for you, that cleans up these cache records. Whenever those rows "expire" they get cleaned up by the job, which runs every 15 minutes by default. It is important that this job be able to completely clear the table once in a while, so it is suggested you do not set a cache time of over 1 hour if you can help it. Longer cache periods will be better served once Rock v8 is released, as it will include support for caching the entire DataView result set.

By default, this filter is secured so that only Rock Administrators can use it. This means only a Rock Administrator can build a Data View using this filter. Others can still use (that is see) the Data View results if they have access to the Data View itself, but they will not be able to make changes to the filter criteria.

Export Data

You will find this page under Admin Tools > Installed Plugins > Data Toolkit. This page allows you to export any Rock entity to either a CSV or JSON formatted file. When you first select an Entity Type it will automatically configure all the properties of that entity type as columns to be exported with their default values. Don't worry, you can modify this set of columns. You can re-order the columns, add new columns or remove columns entirely. Once you have your export configured you can even save it as a preset if you think you will use it again later.

Here is an example of the export page that is using a preset to define a sub-set of columns when exporting the Defined Type entity:

Export Data

As you can see, we are only exporting three columns, even though there are more available. Specifically, you will notice that we are not exporting the Id or Guid properties, among some others. Another thing to note is the use of Lava in the Category column. The Defined Type entity stores the Category Id, but that isn't very useful to us. So we use Lava to reference the Category and its name.

Hopefully that screenshot has whetted your appetite for what you can do with this.

Export Options

Presets


You can have any number of presets configured, so feel free to add as many as you want. A preset will restore the Entity Type, DataView filter, Export As, and the column list. So your entire configuration can be saved and re-used later without having to remember any specific settings.

Entity Type

This is what you want to export. In our example, we are exporting a list of Defined Types.

DataView Filter

Yep, you read that right. Let's say you wanted to export all the Small Group records. A Small Group is a type of Group, but so is a family and a dozen other things you don't care about. In this case, you would create a Data View that filters your Groups to only those you actually want to export. This isn't just limited to Groups; you could, for example, create a Data View that filters the Defined Types to a specific category so that you are only exporting one category's worth of data.

Export As

This one is pretty straightforward. You can export as either CSV (Comma Separated Values) or as JSON. 99% of the time you are probably going to export as CSV, but if you ever need to export as JSON, you can!

Columns


This is where things get really interesting. By default, the list of columns will be populated for you to export all the data for an entity type. You probably don't need everything so if you want to trim things down a bit you can remove columns you don't need. You can also add new columns. For example, say you wanted to have both the Category Id and the Category Name. It's a single property in the database, but you can create two columns, one for each.

As we mentioned, you can use Lava to customize the export. Every item you are exporting is the Row - just like if you were doing a merge template. Therefore, to export the category id of the Defined Type, you would use the Lava {{ Row.CategoryId }}. Since Lava can also traverse objects you can get the category name by using {{ Row.Category.Name }}.

But what if you need to do more? Say you need to supply a default category name if one doesn't exist? Sure thing, just use standard Lava: {% if Row.Category and Row.Category != null %}{{ Row.Category.Name }}{% else %}Default{% endif %}.

In order to make your life easier, we pass the merged Lava data through the Trim() method, which removes any whitespace at the beginning or end of the string. This should help when doing complex Lava.

One final thing to note about the Lava, there is a block setting which allows you to specify which Lava Commands are enabled for use. By default none are enabled for security reasons, but you can enable any Lava command you want and use it, even the web request (though why you would I don't know).


The Preview button will give you a preview of what the first 10 rows of data will look like. This displays as a Rock grid so it is simply a way to check your Lava and make sure it's doing what you expect before you export out the entire 3,000 items of your selection.

Once you are happy with the preview of the data, click the Export button to begin exporting. You will get an on-screen progress of how many rows it needs to export and how far along it is. When it's done you will be provided with a button to download the data file.

Import Data

You will find this page under Admin Tools > Installed Plugins > Data Toolkit. This page allows you to import any Rock entity from either a CSV or JSON formatted file. This can either be a file generated from the Export Data page or a file that came from a 3rd party solution.

CSV files are the wild west. The format of them is pretty loose, but we think we have covered most of the bases. If you are having issues, you might run your file through a CSV validation site and see if it thinks your file is valid.

When you first upload a file it will be scanned so that it knows what columns are available. If you click the Sample tab, you can see a sample of the first 10 rows of the file as well as the column names it identified. This can help you to know that your file is (probably) one that can be imported. A sample from our previous export might look like this:

Import Data Sample

Import Settings

The screenshot for this is huge, so I will include it at the end for reference.

Presets


You can have any number of presets configured, so feel free to add as many as you want. A preset will restore the Entity Type and the column settings. So your entire configuration can be saved and re-used later without having to remember any specific settings.

Entity Type

This is what you want to import. In our example, we are importing a file of Defined Types that we previously exported.

Columns


Things are a bit more hectic on import, from a visual standpoint. Every property of the selected entity will be listed. You cannot re-order them and you cannot delete them. If you do not want a value set then leave the Lava Template blank and it will be ignored.

The Lava Template works the same as it does for the Export page. That is, each row of data being imported is referred to by the lava variable Row. So if you want to reference the Name column, you would use the Lava syntax of {{ Row.Name }}. As with the export page, you can use any valid Lava including filters and entity commands (if you enable them in the block settings).

When you select the Entity Type, it will automatically fill in the Lava Templates for any properties that match a column name in the data to be imported. If you look at the screenshot below, you will notice that it only filled in the Name and Description even though our file included a third column for the Category. The reason for that is the data file used the column name of Category whereas the property name is CategoryId.

If you wanted to import everything into a single category, you could just type in the Id number of the category for the CategoryId property. If you wanted it to be dynamic, you could write up some Lava with Entity Commands to search for a category that matches the name in the Row data.

For another example, let's say you have two files: one for the Categories and one for the actual data. The first file has a unique identifier for the category which is referenced in a column in the data file. On import, you could assign this unique identifier to the ForeignId column. Then, when importing the data file, you could use entity commands to locate that category and snag its Rock Id.


Okay, now back to the fun part. Actually importing the data. Once again there is a Preview button that will give you a preview of the top 10 rows of data that would be imported. This is different than the Sample we looked at previously. This preview is after any Lava merging happens and it shows what the Entity will look like after the data has been imported into it. It's not a perfect preview, but it should again help give you an idea as to whether your Lava is correct.

Next, we recommend you run the Test Import. This is identical to a full import, but it does everything in what is called a "transaction". Once the test import is finished it rolls back the transaction so the database is in the state it was before you imported. This will let you see if there are any errors during the import without actually running it.

Finally, when you are ready to really import everything you can click the Import button and it will begin the import process. This will do everything the exact same as the Test import did, but when it's done it will commit those changes to disk. If any single entity (row) cannot be imported because of an error, the entire import will be aborted - including any rows that were already imported. This means if you are importing 100 rows and row 53 has an error, then 0 (zero, none, not any at all) rows will have been imported to the database. Fix the error and you can try again.

With either a test import or a live import, if an error is encountered it will provide details about what the reported error was as well as a JSON representation of the row that failed to import. This should help you track down the specific row in the file and figure out what is wrong.

Import Data

Note: There is currently no way to skip some rows on import. The entire file will be imported. For example, if you export from another system a CSV file that contains 10 rows and import that you will have 10 entities in Rock. If you then export another file that has 20 rows, 10 of which are the same as was previously exported, and you then import that second file into Rock, you will have 30 entities in Rock. We are considering a method to allow you to detect and skip duplicates but please let us know if that would be useful to you.
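Until then, one workaround is to de-duplicate the file yourself before importing. A sketch in JavaScript (the choice of key column is up to you; Name here is only an example):

```javascript
// Drop rows whose key column value has already been seen, so re-importing
// an overlapping export does not create duplicate entities.
function withoutDuplicates(rows, key) {
  const seen = new Set();
  return rows.filter(row => {
    if (seen.has(row[key])) return false;
    seen.add(row[key]);
    return true;
  });
}

// withoutDuplicates([{ Name: 'A' }, { Name: 'B' }, { Name: 'A' }], 'Name')
//   → [{ Name: 'A' }, { Name: 'B' }]
```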