While I’ve recently been writing about creating Node API endpoints, sometimes you find yourself on the other end – not building the data source, but consuming it.
One of our legacy vendors on the trade desk prefers to deliver its data through SQL Server Reporting Services, which is not exactly what anyone with serious chops would consider a legitimate delivery system for business-critical data, but for now it’s what we have.
To its credit, SQL Server Reporting does provide a number of ways to export its data into formats usable in a variety of contexts, but for the most part these processes require a) manual interaction with the reporting page and b) the report creator’s cooperation in enabling the feature.
Fortunately we at least had access to the ATOM feed option.

We were left, however, with the question of how to pull this data into our own systems in a regular, automated way. I found that while it is possible to pull this data into Excel, it is ultimately far more scalable to write an XML reader in C# to parse the data for you.
#1 – Use Excel Data Connections
One way to get at the data more dynamically has been covered elsewhere. This avenue is best for the non-technical user, when data latency is a low priority, and for rarely used data sources. It has a number of shortcomings, most of which aren’t really worth getting into because they are the common problems that come with over-relying on Excel to do real data-processing work.
The process in Excel 2013 is not all that dissimilar to what is laid out in the link I mentioned before, but thanks to the modern ‘ribbon’ layout, you have to look in slightly different places. You will also need to choose “From OData Data Feed” in order to consume a SQL Reports feed, not the “XML Data Import” option.

Once you have the data, you don’t want it to go stale. The properties menu offers a number of options for refreshing your data on a regular schedule.

Your data will return as a table, with filters and all the bells and whistles you would expect from Excel. Just expect it to grind on update, like it would for any other sort of mass import.
#2 – Use C# XmlUrlResolver and XmlReader
While the Excel option is clearly a dirty solution, I would refrain from calling it a ‘quick and dirty’ option. Quick implies that it could easily become a “write once” solution that is repeatedly applied in any number of situations. Excel formulas by their nature fail that test. Additionally, having your data inside of Excel is only good for that single application – using it more broadly will mean more work later on.
Fortunately, .NET provides classes that make it pretty easy to consume ATOM feeds in a few quick steps.
#1 – Know your feed path
This is different from the rendered HTML page you can access through SQL Server Reporting. The format for the feed path is:
http://<server name>/ReportServer?%2f<ReportName>&rs%3aCommand=Render&rs%3aFormat=ATOM&rc%3aDataFeed=<Identifier>
where ReportName is the value of the ItemPath GET variable from your HTML path, with each + character replaced by the %20 escape.
For example, if your HTML path had
ItemPath=%2fUserData%2fThe+Profile+Data
your ReportName would be
UserData%2fThe%20Profile%20Data
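
If you have to build these paths for more than one report, the substitution is simple to script. Here’s a minimal C# sketch of it – the server name, ItemPath value, and DataFeed identifier below are placeholders, not values from any real report:

using System;

class FeedUrlExample
{
    // Builds the ATOM feed path from the ItemPath GET variable,
    // swapping each + for the %20 escape as described above.
    static string BuildFeedUrl(string server, string itemPath, string dataFeedId)
    {
        string reportName = itemPath.Replace("+", "%20");
        return "http://" + server + "/ReportServer?" + reportName +
               "&rs%3aCommand=Render&rs%3aFormat=ATOM&rc%3aDataFeed=" + dataFeedId;
    }

    static void Main()
    {
        // Placeholder values for illustration only.
        Console.WriteLine(BuildFeedUrl("myserver", "%2fUserData%2fThe+Profile+Data", "xAx0x0"));
    }
}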
#2 – Consume Your Feed
To do this, you will use XmlReader, but you also need to understand how SQL Server Reporting puts together its data feed.
First, you need to establish some settings in XmlUrlResolver to reach the external resource. For instance, if your machine has access to the feed by virtue of the network, you would only need to include the default credentials.
XmlUrlResolver resolver = new XmlUrlResolver();
resolver.Credentials = System.Net.CredentialCache.DefaultCredentials;

XmlReaderSettings settings = new XmlReaderSettings();
settings.XmlResolver = resolver;
You then need to open the connection while including those settings.
// Create an XML reader for this feed.
XmlReader reader = XmlReader.Create(yourFeed, settings);
using (reader)
{
    while (reader.Read())
    {
        // Only detect start elements.
        if (reader.IsStartElement())
        {
            ...
To read in the data, you need to understand the structure of the feed. SQL Server Reporting constructs each <entry> in a uniform manner (it is an ATOM/XML feed after all). It is essentially the below, though I recommend looking at the spec if you need more specific guidelines.
<entry>
  <m:properties>
    <d:valueinreport1>Value1</d:valueinreport1>
    <d:valueinreport2>Value2</d:valueinreport2>
    <d:valueinreport3>Value3</d:valueinreport3>
  </m:properties>
</entry>
Thus, reading through the feed is more of an exercise in loop repetition and collection capture than actual querying like you would want to do against a real API.
To snake your way through it, it is probably easiest to switch on the element name, storing each value however/wherever makes sense for your situation as it comes.
            ...
            // Get the element name and switch on it. Reading once more
            // from a start element moves the reader to its text node.
            switch (reader.Name)
            {
                case "m:properties":
                    // A new row of report data begins here.
                    break;
                case "d:item1":
                    if (reader.Read())
                        Console.WriteLine("item1: " + reader.Value.Trim());
                    break;
                case "d:item2":
                    if (reader.Read())
                        Console.WriteLine("item2: " + reader.Value.Trim());
                    break;
                case "d:item3":
                    if (reader.Read())
                        Console.WriteLine("item3: " + reader.Value.Trim());
                    break;
            }
        }
    }
}
Once the data is collected into some sort of array or object, you can pass it back to perform additional operations.
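
For a sense of what that capture might look like end to end, here is a minimal sketch that generalizes the switch above: every <m:properties> element starts a new row, and any d:-prefixed element becomes a column in it. The method and variable names here are mine for illustration, not anything SQL Server Reporting prescribes.

using System;
using System.Collections.Generic;
using System.Xml;

class FeedCollector
{
    // Walks the feed and returns one dictionary of
    // column name -> value per row of report data.
    static List<Dictionary<string, string>> CollectEntries(string feedUrl, XmlReaderSettings settings)
    {
        var rows = new List<Dictionary<string, string>>();
        Dictionary<string, string> current = null;

        using (XmlReader reader = XmlReader.Create(feedUrl, settings))
        {
            while (reader.Read())
            {
                if (!reader.IsStartElement())
                    continue;

                if (reader.Name == "m:properties")
                {
                    // A new row of report data begins here.
                    current = new Dictionary<string, string>();
                    rows.Add(current);
                }
                else if (current != null && reader.Name.StartsWith("d:"))
                {
                    // Strip the d: prefix for the column name, then
                    // advance to the element's text node for the value.
                    string column = reader.Name.Substring(2);
                    if (reader.Read())
                        current[column] = reader.Value.Trim();
                }
            }
        }
        return rows;
    }
}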
#3 – Use the Captured Data
Just as a little bonus insight…
To replicate the feel of the Excel data connection option without its shortcomings, you can create a UDF library for your Excel installation that outputs your collected data. This makes your data available on command, without the per-workbook configuration the data connection route requires. It also has the added benefit that a single collection can be held in memory when multiple workbooks are drawing from the same source. These features, among others, significantly improve the durability and portability of your spreadsheets.
You’ll also need to implement a means of metering your updates, so your data sets stay fresh without re-querying the report server on every call. A sketch of both ideas follows below.
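
As a rough sketch of how those two pieces might fit together, here is a minimal UDF built with the open-source Excel-DNA library (one way of several to expose .NET code to Excel; the FEEDDATA name, the cache, and the 15-minute window are all illustrative choices, not requirements):

using System;
using ExcelDna.Integration;

public static class FeedUdfs
{
    // One cached copy shared by every workbook in the Excel session.
    private static object[,] _cache;
    private static DateTime _lastRefresh = DateTime.MinValue;

    // Exposed to Excel as =FEEDDATA(); returns the collected feed
    // rows as a 2-D array the sheet can spill or index into.
    [ExcelFunction(Name = "FEEDDATA", Description = "Returns the latest report feed data")]
    public static object[,] FeedData()
    {
        // Metered refresh: only go back to the report server
        // when the cache is older than 15 minutes.
        if (_cache == null || DateTime.Now - _lastRefresh > TimeSpan.FromMinutes(15))
        {
            _cache = LoadFromFeed();   // your XmlReader collection code from above
            _lastRefresh = DateTime.Now;
        }
        return _cache;
    }

    private static object[,] LoadFromFeed()
    {
        // Placeholder: convert your collected entries into a 2-D array here.
        return new object[,] { { "Value1", "Value2", "Value3" } };
    }
}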
I recognize that this sort of topic is a bit inside baseball, but in working on this problem I found very little information out there on SQL Server Reporting Services and its ATOM data feed feature – even many SQL Server experts in my network were unfamiliar with it. It’s clearly an edge case that most of us hope we never have to face, but in the event that you do, I hope this guide helps. If you run into some walls, feel free to reach out either below or on Twitter.