SSAS Tabular 2016: Direct Query vs. In-Memory


With the release of SSAS 2016 last month, I've been thinking about the enhanced Tabular Direct Query mode and its trade-offs with the default In-Memory mode. As you'd expect, there are many factors to consider when deploying or migrating a model as Direct Query or In-Memory, so I've structured my current notes and thoughts in the linked Word document.

SSAS 2016 Tabular Direct Query vs In Memory

I’ll share greater detail in future posts and may revise my conclusions and evaluation process as I get deeper into the restrictions on DAX and performance comparisons.



  1. Great document, thanks for that. I need to choose between DirectQuery and Import using Power BI, and between using SQL views as the source for Power BI or SSAS Tabular… Maybe you should write about these choices too 🙂


    1. Thanks Kraaitje. Yes, I should follow up this post's paper with additional scenarios like the ones you describe, plus the new details I promised back in July. I'd like to first write a post about the new DAX functions and then a follow-up post on using those functions with SSRS 2016 (SSAS Tabular as the source). A full, very detailed comparison of Excel 2016 versus Power BI Desktop would also be a good post or paper to follow up today's sample Power Pivot post. So it will probably be 3-4 weeks before I can write again about Direct Query vs Import.


  2. It would be awesome if you could also detail the difference in hardware requirements for both scenarios, even considering that you'd need to know a use case to estimate… Anyway, we all know how in-memory works, but what about DirectQuery? If you have an estimate of 5,000 possible end users and a fast appliance supporting the DW, how would you estimate the hardware for SSAS in DQ mode?


    1. Hi Luis. Sorry for the slow reply. I'm not aware of any official guidance from Microsoft or real-world tests of SSAS DQ at that scale, but for your scenario (5K users) my thought would be to focus on a fast CPU clock (3 GHz+) for the SSAS DQ server and larger L2 and L3 cache sizes. Memory and disk would be far less important, but the CPU clock would be needed to quickly translate all the incoming DAX and MDX queries into SQL statements. Caching in DQ models would help as well.

      Assuming the SSAS DQ model and the source appliance are both on-premises, I'd want the source appliance and the SSAS DQ server as close together as possible, and I'd want to know whether the user queries are coming from on-premises as well (local Excel and Power BI Desktop?) or from the Power BI service and thus going through the gateway. If it's the latter, you'd want to monitor the traffic and gateway counters, as the gateway, rather than the SSAS DQ server, could be the bottleneck; in that case you would need to scale up your gateway server.

      Maybe more importantly, you want to set 'Assume Referential Integrity = True' on your SSAS DQ fact-to-dimension relationships so that inner join SQL queries are sent to the source appliance rather than outer joins. Ensuring the fact and dimension queries used by your SSAS DQ model are simple and efficient is essential. Additionally, you'll want to be careful to use DAX functions in your measures that have been optimized for DirectQuery – several fairly common DAX functions don't necessarily translate into efficient SQL queries, per my SQL Saturday doc.
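      To make the join behavior concrete, here's a rough sketch of the kind of SQL a DirectQuery model might generate for a simple DAX sum grouped by a dimension attribute. The table and column names (FactSales, DimDate, etc.) are hypothetical, and the actual generated SQL will differ in its details:

      ```sql
      -- Without 'Assume Referential Integrity', the engine guards
      -- against unmatched fact keys with an outer join:
      SELECT d.[CalendarYear], SUM(f.[SalesAmount]) AS [Sales]
      FROM [dbo].[FactSales] AS f
      LEFT OUTER JOIN [dbo].[DimDate] AS d
        ON f.[OrderDateKey] = d.[DateKey]
      GROUP BY d.[CalendarYear];

      -- With 'Assume Referential Integrity = True' on the relationship,
      -- an inner join can be emitted instead, which the source system
      -- can typically optimize more aggressively:
      SELECT d.[CalendarYear], SUM(f.[SalesAmount]) AS [Sales]
      FROM [dbo].[FactSales] AS f
      INNER JOIN [dbo].[DimDate] AS d
        ON f.[OrderDateKey] = d.[DateKey]
      GROUP BY d.[CalendarYear];
      ```

      On a large fact table against an MPP appliance, that inner-vs-outer difference can matter a lot, which is why the setting is worth the data-quality check it implies.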

