The Data Insights 2 Day Master Class

I’m super excited to be presenting a Data Insights 2 Day Master Class in Wellington, NZ with my good friend Matt Allington.  This is the first time we’ll be working together to bring our unique strengths to our participants in a joint session format, and it’s going to be AWESOME!

Ad for the Data Insights Masterclass in Wellington NZ

How is the event going to work?

We think you’ll love this.  We’re going to divide our group in two.  You’ll get one full day with me on Dimensional Modeling, and one full day with Matt, focused on the DAX formula language.  These two components are essential to understand when you want to build truly dynamic, scalable and stable data models, and we're going to cover both in detail.

What is covered in the Dimensional Modeling day?

Ken will be looking deeply at how to structure your data for a successful Excel/Power BI data model.  You’ll learn how your data should be shaped, what the data model expects in its tables, and a variety of techniques and patterns to work around common join problems.  Our goal here is very simple: to teach you everything you need to lay the foundation for a data model that will stand the test of time.

But not only will you learn practical hands-on techniques to lay this groundwork, you’ll also learn the key terminology at play.  By the time you leave this session you’ll be able to identify things like ‘facts’, ‘dimensions’, ‘relationships’, ‘schemas’, ‘slowly changing dimensions’ and much more.  Armed with this knowledge you will be able not only to design your own models properly, but also to understand other materials you reference during your career.

As you might expect from one of the world’s leading voices on Power Query, there’s going to be a heavy focus on Power Query in this course.  But it's Power Query with a purpose: to feed a Power Pivot Data Model.

What is covered in the DAX Formula day?

Matt will take you into the world of DAX formulas, exploring how this incredible language can be used to summarize virtually any statistic you want to know.  He’s one of the world’s experts in the DAX language and will teach you not only what you SHOULD do with DAX, but what you SHOULDN’T.

When Is This?

Soon!  It’s going to be hosted in Wellington, NZ on Feb 24 and 25, 2020.  But the good news is that there are still seats available, and we’d LOVE to see you there with us.

How Much and Where Do I Sign Up?

Great questions!  Head over to ExceleratorBI for all those details.

Remove Dynamic Number of Top Rows

Removing the top five rows from a data set is easy in Power Query, but what do you do when the number of rows changes?  There isn’t a built-in Remove Dynamic Number of Top Rows function.  In this post we’ll look at how to set this up.

Illustration of the issue

Assume you have the following report, and you’re only interested in the Cider sales:

With Cider starting in row 9, we’d need to remove the top 8 rows.  That’s fairly easy.  You just need to:

  • Go to Remove Rows -> Remove Top Rows -> 8
  • Promote headers
  • Do whatever else you need to do to the data

But then you get an updated version of the data set, and it looks like this:

Uh oh.  Best case, if you run the previously generated Power Query script, you’ll end up with the following result:

But more likely, if you promoted the clean header row from the original data set, you’ll get a step level error since the revised data set doesn’t yield a “Cider” column when row 1 (shown above) is promoted to header:

Regardless of which one of these scenarios appears worse to you, I think we can agree that neither one is desired.  So how do we make this work on a dynamic basis?

Solution Architecture

The way I approach this issue is to split the job into 3 queries as follows:

Let’s look at how this works in practice…

Query 1:    Raw Data

The purpose of this query is quite simple:

  • Connect to the Raw Data source
  • Perform any preliminary cleanup
  • Rename the query as “Raw Data” (add something descriptive if you have many data sources)
  • Set the query to load as a Connection Only query (disable the load in Power BI)

The key thing to note here is that we’re not doing any work to remove top rows beyond things that we know will ALWAYS occur.  We may want to drop columns and do other things to reduce our data set; we just don’t want to touch anything we can’t guarantee will be exactly the same when we get updated data.

In the case of the data sample I showed above, I’m just going to connect to the data set and load it as connection only.  (While I could make an argument that the first 3 rows will always need to go, I will get rid of those when filtering to just the cider header anyway.)

Query 2:    Generate the Dynamic Row Number

The next step is to generate the number that indicates the dynamic number of top rows we are looking for.  Even though the row which holds our data changes, this is actually relatively easy once you know how:

  • Right click the Raw Data query -> Reference
  • Go to Add Column -> Add Index Column -> From 0
  • Filter one of the columns to the data you are looking for
  • Right click the [Index] Column -> Remove Other Columns
  • Go to Home -> Keep Rows -> Keep Top Rows -> 1
  • Right click the value in the cell -> Drill Down
  • Rename the query as “HeaderRows”
  • Set the query to load as a Connection Only query (disable the load in Power BI)

You now have a query that will dynamically pick up the number of rows to be removed from the top of the data set before it encounters the text you are looking for.
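
For reference, the M code those steps generate looks roughly like the following sketch.  The filter column and the “Cider” text come from this example; your column name and search text will differ:

let
    Source = #"Raw Data",
    #"Added Index" = Table.AddIndexColumn(Source, "Index", 0, 1),
    // Filter to the row that marks where the real data starts
    #"Filtered Rows" = Table.SelectRows(#"Added Index", each [Column1] = "Cider"),
    #"Removed Other Columns" = Table.SelectColumns(#"Filtered Rows", {"Index"}),
    #"Kept First Rows" = Table.FirstN(#"Removed Other Columns", 1),
    // Drill down turns the single remaining cell into a value
    Index = #"Kept First Rows"{0}[Index]
in
    Index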

Query 3:    Remove Dynamic Number of Top Rows

So now comes the magical part:

  • Right click the Raw Data query -> Reference
  • Go to Home -> Remove Rows -> Remove Top Rows
  • Type in the current number of rows to remove (for this example, we’ll assume it is 8 rows)

The formula bar will now be showing the formula =Table.Skip(Source, x) where x is the value you typed in:

  • Replace the value with “HeaderRows”

CAUTION!  Power Query is case sensitive.  You must spell and case HeaderRows EXACTLY as you did previously.  And if you separated those two words with a space, you need to escape it with hash marks and quotes:  #"Header Rows"

If you’ve replaced everything correctly, you should see that everything still works:
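
Put together, the finished query is just a short sketch like this (the header promotion step is shown for illustration and depends on your data):

let
    Source = #"Raw Data",
    // HeaderRows supplies the number of rows to skip, recalculated on every refresh
    #"Removed Top Rows" = Table.Skip(Source, HeaderRows),
    #"Promoted Headers" = Table.PromoteHeaders(#"Removed Top Rows", [PromoteAllScalars=true])
in
    #"Promoted Headers"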

Does it Work?

Here’s what we see when we point Raw Data to the second data set I showed earlier:

The sample file for this example can be downloaded here.

Using Script Lab to Create a PivotTable with JavaScript

"Microsoft Excel is not a spreadsheet application anymore, it’s a medium to store, present, and analyze data." - Puneet Gogia

With the way Excel has evolved in the past few years, it’s almost impossible to work without it in the present world.

Apart from all the stuff you can learn to do in Excel, the thing which I love most is that I can code in it.

Once you know how to use VBA, you can create macros and automate all the activities which you do manually otherwise.

Today, I want to talk about something else. Something powerful! Let’s talk about using JavaScript (TYPESCRIPT) in Excel.

Before we get into this make sure to subscribe to Excelguru's Newsletter to get tips like this in your mailbox.

Script Lab

There’s a whole bunch of APIs which you can use and code with in your Office applications.

But with Script Lab, you can write and test your code directly in your applications. It’s an add-in which you can install from Microsoft’s App Store.

The Script Lab Ribbon

In Script Lab, there are three different sections where you can write JavaScript, HTML, and CSS code.

The Coding Window

Create a PivotTable in Excel using JavaScript

The PivotTable is one of the most useful things which you can use in Excel to summarize and analyze your data.

Even though I like using VBA to create a PivotTable, writing code in JavaScript instead is real fun. In this post I’d like to share with you how to do this.

I’ve split it into a 5-step process, and you can download the files from here to follow along:

  1. Insert Worksheets and Data
  2. Enter a Blank PivotTable
  3. Insert a Row
  4. Insert a Column
  5. Add Values to the Pivot Table

1. Insert Worksheets and Data

The very first thing which you need to create a PivotTable is the source data. So, first of all, you need to write the code which adds source data to the worksheet.

Below is the code:

async function setup() {
  await Excel.run(async (context) => {
    context.workbook.worksheets.getItemOrNullObject("Data Sheet").delete();
    const dataSheet = context.workbook.worksheets.add("Data Sheet");
    context.workbook.worksheets.getItemOrNullObject("Pivot Table").delete();
    const pivotSheet = context.workbook.worksheets.add("Pivot Table");
    const data = [
      ["Farm", "Type", "Classification", "Crates Sold at Farm", "Crates Sold Wholesale"],
      ["A Farms", "Lime", "Organic", 300, 2000],
      ["A Farms", "Lemon", "Organic", 250, 1800],
      ["A Farms", "Orange", "Organic", 200, 2200],
      ["B Farms", "Lime", "Conventional", 80, 1000],
      ["B Farms", "Lemon", "Conventional", 75, 1230],
      ["B Farms", "Orange", "Conventional", 25, 800],
      ["B Farms", "Orange", "Organic", 20, 500],
      ["B Farms", "Lemon", "Organic", 10, 770],
      ["B Farms", "Kiwi", "Conventional", 30, 300],
      ["B Farms", "Lime", "Organic", 50, 400],
      ["C Farms", "Apple", "Organic", 275, 220],
      ["C Farms", "Kiwi", "Organic", 200, 120],
      ["D Farms", "Apple", "Conventional", 100, 3000],
      ["D Farms", "Apple", "Organic", 80, 2800],
      ["E Farms", "Lime", "Conventional", 160, 2700],
      ["E Farms", "Orange", "Conventional", 180, 2000],
      ["E Farms", "Apple", "Conventional", 245, 2200],
      ["E Farms", "Kiwi", "Conventional", 200, 1500],
      ["F Farms", "Kiwi", "Organic", 100, 150],
      ["F Farms", "Lemon", "Conventional", 150, 270]
    ];
    const range = dataSheet.getRange("A1:E21");
    range.values = data;
    range.format.autofitColumns();
    pivotSheet.activate();
    await context.sync();
  });
}

In this code, you have a function called “setup” which does the following things:

  • It will first check if the “Data Sheet” and “Pivot Table” worksheets already exist, and if they do, it will delete them and insert new ones.
  • Next, in the “Data Sheet” worksheet it will add a set of data that will be the source data for the PivotTable.

When you run this code you’ll have something like this in your workbook:

Sample Data Set

2. Enter a Blank PivotTable

When you normally build a PivotTable in Excel, it first inserts a blank PivotTable and then you have to add a row, a column, and values manually.

So now you need a code that inserts a blank PivotTable in the “Pivot Table” worksheet. (You have already written the code in the previous section to insert the “Pivot Table” worksheet.)

Below is the code for this:

async function createWithNames() {
  await Excel.run(async (context) => {
    const rangeToAnalyze = context.workbook.worksheets.getItem("Data Sheet").getRange("A1:E21");
    const rangeToPlacePivot = context.workbook.worksheets.getItem("Pivot Table").getRange("A2");
    context.workbook.worksheets.getItem("Pivot Table").pivotTables.add("Farm Sales", rangeToAnalyze, rangeToPlacePivot);
    await context.sync();
  });
}

Now in this part of the code, we have a function called “createWithNames” which works as follows:

  • First of all, it takes range A1:E21 from the “Data Sheet” worksheet where you have the source data.
  • After that, it specifies the cell A2 in the “Pivot Table” worksheet for inserting the blank PivotTable.
  • Finally, it inserts a blank PivotTable.

After running this part of code you’ll have something like this in your “Pivot Table” worksheet:

Blank PivotTable

3. Insert Row Label

Once you insert a blank PivotTable, the next thing is to insert rows in it. Below is the code which can do this for you:

async function addRow() {
  await Excel.run(async (context) => {
    const pivotTable = context.workbook.worksheets.getActiveWorksheet().pivotTables.getItem("Farm Sales");
    // check if the PivotTable already has rows
    const farmRow = pivotTable.rowHierarchies.getItemOrNullObject("Farm");
    const typeRow = pivotTable.rowHierarchies.getItemOrNullObject("Type");
    const classificationRow = pivotTable.rowHierarchies.getItemOrNullObject("Classification");
    pivotTable.rowHierarchies.load();
    await context.sync();
    if (typeRow.isNullObject) {
      pivotTable.rowHierarchies.add(pivotTable.hierarchies.getItem("Type"));
    } else if (farmRow.isNullObject) {
      pivotTable.rowHierarchies.add(pivotTable.hierarchies.getItem("Farm"));
    } else if (classificationRow.isNullObject) {
      pivotTable.rowHierarchies.add(pivotTable.hierarchies.getItem("Classification"));
    }
    await context.sync();
  });
}

Now in this part of the code, we have a function called “addRow” which works as follows:

  • First of all, it checks which of the first three source data columns (Farm, Type, and Classification) have already been added as rows in the PivotTable.
  • After that, it adds whichever of those three is still missing, so running it repeatedly adds them as rows one by one.

As we are using an HTML button here, you can use that button to add all three rows one by one. At this point, you’ll have a PivotTable with rows like this:

PivotTable with rows added

4. Insert Column Labels

The next thing is to insert columns, and below is the code which you need to use:

async function toggleColumn() {
  await Excel.run(async (context) => {
    const pivotTable = context.workbook.worksheets.getActiveWorksheet().pivotTables.getItem("Farm Sales");
    // check if the PivotTable already has a column
    const column = pivotTable.columnHierarchies.getItemOrNullObject("Classification");
    column.load("id");
    await context.sync();
    if (column.isNullObject) {
      // adding the Classification column to the column hierarchy automatically removes it from the row hierarchy
      pivotTable.columnHierarchies.add(pivotTable.hierarchies.getItem("Classification"));
    } else {
      pivotTable.columnHierarchies.remove(column);
    }
    await context.sync();
  });
}

In this code, you have referred only to the “Classification” column and it works in the following manner:

  • First of all, it checks if the column “Classification” is already there or not.
  • If the column is not already there, then it adds it as a column label; if it is, it removes it.

As we are using an HTML button here, you can add and remove the column label with this code.

At this point, you’ll have a PivotTable with rows and columns like this:

PivotTable with rows and columns added

5. Add Data Values

The key thing in a PivotTable is the values, and with the below code you can add these in:

async function addValues() {
  await Excel.run(async (context) => {
    const pivotTable = context.workbook.worksheets.getActiveWorksheet().pivotTables.getItem("Farm Sales");
    pivotTable.dataHierarchies.add(pivotTable.hierarchies.getItem("Crates Sold at Farm"));
    await context.sync();
  });
}

/** Default helper for invoking an action and handling errors. */
async function tryCatch(callback) {
  try {
    await callback();
  } catch (error) {
    // Note: In a production add-in, you'd want to notify the user through your add-in's UI.
    console.error(error);
  }
}

In the source data, you have two different columns where you have values:

  • Crates Sold at Farm
  • Crates Sold Wholesale

But in this code, only one column is used for the values: "Crates Sold at Farm". However, you can change it, or even use both columns.

And your completed PivotTable is here:

Completed PivotTable

How to Add this Code into Script Lab

As I have already mentioned, you can also use HTML and CSS in the Script Lab to create a side-pane where you can run and test your code.

Note: In the code stored on GitHub, I have added a few extra lines to handle errors; here is the download link.

Make sure to install the Script Lab add-in and then follow the below steps to enter this code into the Script Lab:

  • First of all, click on the Code button on the Script Lab tab to open the Code Editor.

Click on Code in the Script Lab tab to open the Code Editor

  • After that, click on the Menu button and select New Snippet.

Adding a New Snippet in the Script Lab Code Editor

  • Next, you need to add JavaScript, HTML, and CSS into the Script (1), HTML (2), and CSS (3) tabs respectively.

Tabs for adding your JavaScript, HTML, and CSS codes in the Code Editor

  • From here, you need to run the code. For this, you need to click on the Run button from the Script Lab tab.

Clicking the Run button in the Script Lab tab will run the code

Once you click on the Run button, the side pane appears, just like below:

The side pane shows the code being run

Because you have built your code in five parts, you will have five different buttons that execute those steps and create a PivotTable.

You just need to click on the button to execute each step.

Isn’t it COOL?

Make sure to share this post with your friends, I am sure they will appreciate it.

Quick Thing: Apart from learning about and using JavaScript in Excel, there's one more thing which I'm curious to learn these days, and that's Power Query's M language. Don't forget to check out M is for Data Monkey.


About the Author: Puneet Gogia

Puneet is the Co-Founder at ExcelChamps and has been using Excel since his college days. He's helped thousands of people to understand the power of spreadsheets and learn Excel and VBA. You can find him online, tweeting about Excel, on a running track, or sometimes hiking up a mountain.

Column Quality and Distribution Features are Finally Here

If you've worked with Power Query in Power BI, you've seen the Column Quality and Distribution features already.  But I'm super excited to say that I've finally received them in the Insider build of Excel!

What are the Column Quality and Distribution Features??

If you're not familiar with these, let's break them down a bit:

Data Quality Indicators in Column Headers

In its simplest form, you'll see that you now get an indicator in the header of each column, as shown here:

Column quality in Power Query Headers

Notice that red bar at the top of column C?  That's happening because there is an error in the column!  This is SUPER awesome, as we've been teaching for years that you have to scroll down looking for errors each time you change from text to a numeric data type.  This should really help, as the column header will make it obvious that something went wrong.  All green?  You should be good to go.  See red?  You've got potential problems to deal with.  (To be fair, you may still get errors after loading, as it is working with a data preview, but still, this will be a BIG help.)

The nice thing about this is that it is always on.  Nothing needs to be set up to enable this by default.  (And so far as I know, you can't turn it off - but then why would you want to?)

The Column Quality Feature

This one is enabled by going to the View tab and choosing "Column Quality", resulting in a view that looks like this:

Power Query Column Preview Settings

This adds 3 new lines to the top of our columns:

Column Quality in Power Query

Now, I don't plan on leaving this on all the time. However, the cool thing here is that I can very quickly identify that 20% of my data rows in column C are errors.  And while I don't have any empty rows, it would show me that at a glance if I did.

The Column Distribution Feature

The idea behind the Column Distribution feature is to give you a quick glance at the distribution of values in your columns.  Is there a single value that shows up more than others, or is the data uniformly distributed?  This can help you quickly identify if the data has any consistency to it or not:

Column Distribution in Power Query

Notice that in Column a, we have 3 distinct values (b, d, e), but only one value that is truly unique (b only occurs once in the entire data set).  Meanwhile, in column b, each value occurs in the data set once and once only, with no values appearing more than this.  On a data set this small, this isn't going to be super insightful. But say you are pulling in data that spans your monthly summaries for the last year. If suddenly you see 13 values where you are expecting 12... that's a good indicator you might want to look a little deeper.

Something else that is worth noting is that you don't have to keep the Column Quality on to get the Column Distribution chart.  These features can be used together or individually.

The less obvious Column Profile Feature

The final checkbox in the Data Preview menu is the Column Profile feature.  This one is very cool as it only activates when you select the entire column.  And when you do so, it takes over the preview window providing most of the information from the previous two settings.

Column Profile in Power Query

We've got even more stats (although I wish it also showed the % of values like in Column Quality), as well as the value distribution chart.

What I love about this is that it can be called up on-demand by selecting the column, but doesn't get in my way when I don't need it.

My Thoughts on this Feature

I've been waiting for this to hit Excel for over a year now, and am SUPER happy to see it finally show up.  Personally, I'll be running with Column Profile (the last option) selected, but not Column Quality or Column Distribution. The reason is pretty simple... I want to minimize the amount of info between me and my data.  The Column Profile feature will give me what I need in an on-demand fashion when I see the data colours change at the top of my column. 🙂

How do I Get These Features?

This feature is so hot and fresh that it showed up for Office Insiders this morning (v1910 build 12112.x).  So you need to get to that version as a minimum to get these features.  Hopefully it will start rolling through the regular update channels next month!

 

Power Query Challenge 7 – Phone Words

Yesterday, Nick (one of our forum users) posted a Phone Words challenge in our forum, which looks like a great candidate for Power Query Challenge #7.  I haven't had time to tackle it myself, but if you're up for a Friday challenge, why not give it a go?

1-800-Get-Data converts to 1-800-438-3282

Here's the Challenge (word for word):

Take any Phone # and convert it to all possible Letters aka Phone Words.
Phone # can be input any format (strip out all other characters)

Example:

536-7857 = Ken Puls, etc...

Here's the thread where you can post your solution.

Thanks Nick!

PS, if anyone else has challenges they'd like to post, I'm more than happy to set them up in this series.  Obviously it's been a bit quiet here lately, for reasons that we hope to announce soon!

Do Data Types Matter in Power Query?

One of the things that I find a bit misunderstood in Power Query is whether or not Data Types matter. I mean, I’m sure everyone agrees to some point that they do, but do you know just how much they matter, and why?

Over the weekend, I received an email from one of our loyal students which read, in part:

I am going under the assumption that in Power BI nothing is free. Steps, calculated columns, measures and so on could be cheap but they are never free. It is with this premise in mind that I pose the following theory.

Users should sparingly use Change Type in Power Query. It is fine to Change Type when converting Date/Time to Date, Decimal Number to Whole Number, and others that actually change the value. It is a waste of resources to Change Type from Decimal or Whole Number to Currency. Even if you want the column to look like Currency, you can apply that format in the Data Model view and save one Power Query step.

On the face, this theory seems somewhat reasonable. After all, adding additional steps is bound to add some overhead to the Power Query process in most cases. And let’s be honest, in small models, it may make no difference to you at all. But when things get bigger…

Data Types vs Formats

To understand why Data Types matter, we need to get something very clear right off the bat: Data Types and Formats are not the same thing. Data Types dictate what kind of data you have and determine how much memory is allocated to store a value. Formatting, on the other hand, tells you how you want the values to appear. To see this in practice, have a look at the following values in Power Query, where the Data Type has been set to Currency:

Column set to Currency Data Type

Notice that the only place you see a $ sign is in the header. And see how the decimal numbers do not line up? I can tell you from many years of teaching accountants that this drives them bonkers. That 1.7 needs to be 1.70! But you don’t do that here; you do it in the Excel worksheet, Power Pivot model or Power BI visual.

The key to remember here:

  • In Power Query, you define Data TYPES
  • In Power Pivot, Excel or Power BI, you define Data FORMATTING
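
As a minimal sketch of that split (the “Sales” table and the “Amount”/“Units” column names are placeholders, not from this example), the Power Query side only ever sets the type:

let
    Source = Excel.CurrentWorkbook(){[Name="Sales"]}[Content],
    // Define the data TYPE here; the $ sign and decimal display are applied
    // later as FORMATTING in Excel, Power Pivot or Power BI
    #"Changed Type" = Table.TransformColumnTypes(Source, {{"Amount", Currency.Type}, {"Units", Int64.Type}})
in
    #"Changed Type"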

Excel’s Data Types

In the classic days of Excel, we only had four data types that we had to worry about. Those were:

  • Text,
  • Numbers,
  • Blanks, and
  • Errors

(Errors, although represented by textual patterns like #REF!, #N/A, #NAME?, and the like, actually count as a different data type.) Today it gets a bit more complicated with the Stock and Geography data types, as I blogged about here, but historically anything else was just a number that was formatted with a number formatting string. Examples of these include:

  • Date: format it as a number, and you’ll see the number of days since Jan 1, 1900
  • Time: a little trickier, but this is just a fraction of a day where .25 is 6:00 AM, .5 is 12:00 PM and .75 is 6:00 PM. And if all you have is the time, and you format it as a DateTime, you get that time on Jan 1, 1900.
  • True/False (Boolean): while this shows in the cell as TRUE/FALSE, which looks like text, if you multiply them by 1 you’ll see that TRUE equates to 1, where FALSE equates to 0.

But that was Excel, where data types and number formats were the same thing. That’s not Power Query.

Power Query’s Data Types

Unlike Excel, which has a pretty short list of data types, Power Query seemingly has a ton:

  • Numeric Data Types:
    • Decimal Number
    • Currency (Fixed Decimal)
    • Whole Number
    • Percentage
  • DateTime Data Types:
    • Date/Time
    • Date
    • Time
    • Date/Time/Zone
    • Duration
  • True/False (Boolean)
  • Text
  • Others (Binary, Tables, Records, Lists, and more)
  • Any (the dangerous “undefined” type which allows the application to determine the correct data type)

The key to recognize is that each of these data types is DISTINCT, meaning that each of these is different in some way from every other. (While we won’t get into it in this post, unlike Excel which implicitly converts data from one type to another, Power Query requires explicit type conversion via use of functions like Number.From() or Date.ToText(), which can be frustrating at times!)
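
For instance, here is a small sketch of my own (not from the original post) showing how each of these conversions has to be spelled out, because M won’t coerce the values for you:

let
    // Explicit conversions: M raises an error if you try "Seats left: " & 12 directly
    TextToNumber = Number.From("107.25"),                          // the number 107.25
    DateAsText   = Date.ToText(#date(2020, 2, 24), "yyyy-MM-dd"),  // the text "2020-02-24"
    Concatenated = "Seats left: " & Number.ToText(12)              // works once the number is converted
in
    Concatenated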

For the purpose of this post, however, I want to focus on the first three numeric types: Decimal Number, Currency and Whole Number, and ask the question: Does the Data Type matter?

Illustration Background

Let’s have a look at an example. For the illustration, I set up a Power Query chain that looks like this:

Viewing our Query Chain

The Data table had 100,000 randomly generated [Sales] records that vary between 1.27317262341058 and 100000.017761279, and randomly generated [Customers] that vary between 1 and 1000.

The only change I made in the Whole, Currency and Decimal types was to set the Data Type for the [Sales] column accordingly. In the Customers table, I removed the [Sales] column and duplicates from the Customers column.

I then created 3 quick measures to sum the column up, and dropped them on a Pivot:

Summing up our 3 measures

I don’t think it would surprise anyone here that the [Whole Sum] is slightly different than the [Currency Sum] and [Decimal Sum]. After all, the numbers were rounded at the source before being added up. And so far, we can see that the [Currency Sum] and [Decimal Sum] look pretty much the same. At least until we expand them a bit:

Decimal differences between Currency and Decimal Data Types

The only thing that should be a surprise here is that currency only holds up to four decimals, not two as most people expect. This actually makes sense when you start thinking about foreign exchange transactions, and how they are always carried to four decimal places.

But is that it? Is a Data Type just about rounding? As it turns out the answer to that is both yes and no.

Testing the Model Memory

The next step here was to test the model memory and see how Power Pivot is storing the data. To that end, here’s a little table that shows exactly that:

Table illustrating how Power Pivot is storing the data

Before we get into this, I want to call out something important here. The Data Type that is showing has been read from Power Pivot. Notice that Whole Number shows as such, consistent with Power Query. But the Currency and Decimal tables both show Decimal. As it turns out, Power Pivot doesn’t make a distinction between these two data types. However, the distinction between these two Data Types matters to you anyway, as I’ll explain.

So, what does that all mean?

In the Whole Number table, I had rounded off all the decimals. This left 63,815 unique values. And because Power Pivot compresses based on unique values, it deals with this column very well, resulting in a total of 260.54 KB to store these values.

In the Currency table, I effectively rounded all the values off to four decimal places. This left 99,996 unique values in the table (only 4 values were repeats). Despite this, Power Pivot did a good job of compressing the values in memory, resulting in 390.75 KB to store them.

Then we get to the Decimal column. There are only 4 more unique values than in the Currency column, but the memory takes a colossal 5,234.47 KB to store the values, vs the 390.75 KB of Currency. What the heck is going on?

The answer lies in the fact that Power Pivot has to carry all of those decimal places, and once it does, it can flip to storing data using Hash Encoded memory. That’s a deep topic for another time but suffice it to say that this is a bad thing, as Value encoding is much more efficient. (Hash is generally used for Text, and Values for… you guessed it… values!)

Interestingly, if you round the Decimal Number to 5 decimals you end up with 99,999 unique values in the column and a very minor change to the memory used. But if you round it to 4 decimals, the memory of the Decimals column compresses the same as Currency, and the memory need drops to the same 390.75 KB.

Table illustrating memory requirements of the different Data Types
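
If you want to try that rounding yourself, it is a single step; here’s a rough sketch using this illustration’s [Sales] column and assuming the query references the Decimal table:

let
    Source = Decimal,
    // Round [Sales] to 4 decimal places and store it as Currency (Fixed Decimal)
    Rounded = Table.TransformColumns(Source, {{"Sales", each Number.Round(_, 4), Currency.Type}})
in
    Rounded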

Why Model Memory Matters

And this is the final part of the equation to me. Not all time is created equal. My users will accept a 2-minute refresh of the data model. They might say it takes time, but +/-30 seconds in a refresh isn’t anything that they’ll get overly concerned with. They’ll click Refresh, go grab a coffee, then come back to work with the file.

But if they click a slicer and it takes 10 seconds to redraw the Pivot Table or Power BI visuals? Watch out! They’ll claim the model is too slow, ineffective, wrong and useless. I’d MUCH rather push resource consumption into the initial refresh in order to build a memory-efficient model that performs well when being used for analysis.

Wait, what happened to the Data Types?

To be fair, Power Pivot’s compression mechanism is more about unique values and the length of precision than it is about Data Types. But it’s up to you to choose the correct Data Type to future-proof your model and make sure that the compression algorithms can be applied.

But due to the way Excel has always worked, the way the Data Types are named, and the fact that most modellers don’t have a clear understanding of Formatting vs Data Types… users are more likely to pick Decimal over Currency. I mean, why would I ever format my units as Currency? (In Power BI this is called a Fixed Decimal, although it still shows with the currency symbol.)

We need to recognize that lagging decimals really do happen in business. Let’s look at that Units column for a second. Naturally we never sell a partial unit… or do we? I’ll give you 3 examples of things I’ve seen in the Food and Beverage industry that forever changed my opinion on this:

  1. We had a couple of customers who insisted that they be able to purchase half a muffin. I’m not even kidding here. The killer for me was that it only showed up every 10,000 rows of transactions or so, meaning that the column often got set to Whole Number incorrectly.
  2. The salesperson agrees to provide those 300 units for $5,000. No big deal, except that they monkey with the price field in your database to make it work, and suddenly you have a sales price of $16.666667. Fortunately, this one usually gets rounded via setting it to a Currency Data Type, as that’s logical. But what if you set it to decimal or left it undefined?
  3. Things go the other way and the customer agrees to a package price of $5,000 for something that usually carries a price of 17.00 per unit. It gets rung into the system with a reverse engineered quantity of 294.1176470588235 to get it exactly right.

The last is the real kicker as the memory suddenly flips from Value to Hash, the RAM needed to refresh the data expands exponentially and brings your model to its knees. And now every slicer click has gone from 1 second to update your visuals to the better part of 10 seconds. And your audience is screaming that the model is “unusable”.

My Recommended Practice

The final step in every query I build which loads to a destination (an Excel Table or the Data Model) is to correctly define my Data Types. This does two things for me:

  1. It ensures that I never accidentally load a column with an undefined Data Type. (In the case of Dates, they load to Excel as numbers, and to the Data Model as text!)
  2. It ensures that I’m cutting down to the minimum number of decimals I’ll ever need for my analysis.

Does it take more processing time? Maybe marginally. But does it future-proof my solution to keep it performant? Definitely. And it ensures the time to do so happens in the refresh, not in when the model is being used for analysis.

Some Final Thoughts

The issues you saw here with memory also affect DateTimes in a big way, as they are quite similar to decimal numbers, where time is represented as the fraction of a day.

The article touches on some things from a much bigger topic: how to optimize a Power Pivot/Power BI data model. I actually teach a full day course on this topic for CPABC, where we cover how the Vertipaq engine compresses its data, and why you want to avoid calculated columns and fall in love with measures. We make extensive use of Power Query in this course to reshape tables (applying correct data types along the way) so that they can be consumed by the data model in a more efficient manner. We dissect a poorly built model, test its memory and then rebuild it in stages, seeing how our changes made an impact.

And if you’re looking for tools to help with this process… stay tuned to this blog. I’m hoping to have an announcement on that front within the next couple of months.

Solutions for Power Query Challenge 6

This morning I logged in to check the solutions for Power Query Challenge 6 that were submitted and... Wow!  There were a bunch, and some cool variety there.  So now it's time to show you all what I came up with here.

What was Power Query Challenge 6?

The full description and source data can be found in yesterday's post, but in short it was to convert this:

Data table with nested data sets

Data table with multiple data points per cell

To this:

Data in 1NF

Data shown in First Normal Form (1NF)

So how do we do it?

Two Solutions for Power Query Challenge 6

Wait, two solutions?  Why?

As it turns out, I put together two for this one. My first attempt was cooked up to solve the issue on short notice.  Then I built another that seems a bit more elegant.  So I'll show them both, although quickly.

Solution 1 - Splitting Individual Columns and Merging Back Together

The first of my solutions for Power Query Challenge 6 is actually quite similar to what Ali posted in the solution thread.  It basically follows this method:

  • Step 1:
    • Create a Data query that connects to the source data, and load it as connection only
  • Step 2:
    • Create 3 new queries for ItemID, Quantity and Price which
      • Reference the data query
      • Keep the InvoiceID column and the other relevant column
      • Split the relevant column by delimiter, ensuring it splits to rows (not columns as it defaults to)
      • Add an Index column
  • Step 3:
    • Reference one of the Step 2 tables, and merge the other two tables to it, based on matching the Index column in each

So when complete the query chain looks like this:

And returns the table we're after:

The only real trick to this one is that - when you are building the Price query - the Price column will pick the decimal as the delimiter, so you have to force it to a line feed.  So building the Price query would go through the following steps:

  • Right click the Data query --> Reference
  • Select the InvoiceID and Price columns --> Right click --> Remove Other Columns
  • Right click the Price column --> Split column --> By Delimiter
    • Clear the decimal from the Custom area
    • Click the arrow to open the Advanced area
    • Change the selection to split to Rows
    • Check "Split using special characters"
    • Choose to insert a Line Feed character
    • Click OK
  • Set the Data Types
  • Go to Add Column --> Add Index Columns

Resulting in this:

The ItemID and Quantity queries follow the same steps, except that Power Query now correctly identifies the Line Feed as the character to split on.
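
Written out, the Price query amounts to something like the sketch below.  (I've shortened the step names that the UI generates, and "Data" is the query from Step 1; your names may differ.)

let
    Source = Data,
    KeepCols = Table.SelectColumns(Source, {"InvoiceID", "Price"}),
    // Split [Price] on the line feed character and expand the resulting lists to rows
    SplitToList = Table.TransformColumns(KeepCols, {{"Price", Splitter.SplitTextByDelimiter("#(lf)"), type list}}),
    SplitToRows = Table.ExpandListColumn(SplitToList, "Price"),
    Typed = Table.TransformColumnTypes(SplitToRows, {{"InvoiceID", Int64.Type}, {"Price", Currency.Type}}),
    AddIndex = Table.AddIndexColumn(Typed, "Index", 0, 1)
in
    AddIndex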

Solution 2 - Group and Split

While the first solution to Power Query Challenge 6 worked, it left me less than satisfied as it took a bunch of queries.  While effective, it didn't feel elegant.  So I cooked up another solution that uses Grouping.  It's actually quite similar to the first solution that Bill Szysz posted.

The basic method is as follows:

  • Connect to the data
  • Right click the InvoiceID column --> UnPivot Other Columns
  • Right click the Value column --> Split Column --> By Delimiter --> OK

Following the steps above gets us to this state:

To unwind this, we group it:

  • Go to Transform --> Group By
    • Group By InvoiceID, Attribute
    • Aggregate a "Data" column using the All Rows operation

Grouping using the All Rows feature

At this point, we need to number these rows, so I just busted out the pattern to do that from our Power Query Recipe cards (recipe 50.125).

  • Go to Add Column --> Custom
    • Column Name:  Custom
    • Formula:  =Table.AddIndexColumn( [Data], "Row", 1, 1)
  • Right click the Custom column --> Remove Other Columns
  • Expand all fields from the Custom column

Leaving us with this data:

Data Grouped with Numbered Rows

The next step is to Pivot it:

  • Select the Attribute column --> Transform --> Pivot Column
    • Choose Value for the Values
    • Expand the Advanced arrow
    • Change the Aggregation to "Don't Aggregate"
    • Click OK
  • Select the "Row" column and Remove it.  (Yes, it was needed to unpivot this correctly, but now adds no value.)
  • Set the data types
  • Load it to the desired destination

At this point, the query (again) looks perfect:

The desired output

Now, I must admit, this felt far more professional and left me feeling good about it.
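
Pulled together, the whole thing is a single query along these lines.  (This is a sketch: the final data types step is omitted and my step names differ from what the UI generates.)

let
    Source = Data,
    Unpivoted = Table.UnpivotOtherColumns(Source, {"InvoiceID"}, "Attribute", "Value"),
    // Split each nested value on the line feed and expand to rows
    SplitToList = Table.TransformColumns(Unpivoted, {{"Value", Splitter.SplitTextByDelimiter("#(lf)"), type list}}),
    SplitToRows = Table.ExpandListColumn(SplitToList, "Value"),
    // Group, then number the rows within each InvoiceID/Attribute combination
    Grouped = Table.Group(SplitToRows, {"InvoiceID", "Attribute"}, {{"Data", each _, type table}}),
    Numbered = Table.AddColumn(Grouped, "Custom", each Table.AddIndexColumn([Data], "Row", 1, 1)),
    KeepCustom = Table.SelectColumns(Numbered, {"Custom"}),
    Expanded = Table.ExpandTableColumn(KeepCustom, "Custom", {"InvoiceID", "Attribute", "Value", "Row"}),
    // Pivot the attributes back out and drop the helper column
    Pivoted = Table.Pivot(Expanded, List.Distinct(Expanded[Attribute]), "Attribute", "Value"),
    RemovedRow = Table.RemoveColumns(Pivoted, {"Row"})
in
    RemovedRow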

Which Solution to Power Query Challenge 6 is Better?

Naturally, solution 2 is better.  It takes fewer queries, and looks way cooler.  Right?  Not so fast...

The real question is in the performance.  And for this one I thought I'd test it.  But I needed more data.  I expanded the set to 11,000 rows and then used a tool we're working on to time the refreshes.  Privacy was left on, and all times shown are in seconds:

  • Solution 1:  1.43, 1.48, 1.11, 1.27  Avg ~1.32 seconds
  • Solution 2:  2.77, 2.65, 2.63, 2.68  Avg ~2.68 seconds

I'll be honest, this surprised me.  So I went back and added the GroupKind.Local parameter into the Grouped Rows step, like this (as that often speeds things up):

Table.Group(#"Changed Type1", {"InvoiceID", "Attribute"}, {{"Data", each _, type table [InvoiceID=number, Attribute=text, Value=number]}}, GroupKind.Local)

The revised timing for Solution 2 now gave me this:

  • Solution 2A:  2.54, 2.49, 2.56, 2.61.  Avg ~2.55 seconds

So while the local grouping did have a small impact, the message became pretty clear here.  Splitting this into smaller chunks was actually way more efficient than building a more elegant "all in one" solution!

My solution (including 5,000 rows of the data), can be found in the solution thread here.

Power Query Challenge 6

Are you ready for Power Query Challenge 6?  In this challenge we'll look at splitting nested data sets into a nice table in First Normal Form.

Let's take a look at Power Query Challenge 6:

Where did the challenge come from?

The inspiration for this challenge came from last week's Self Service BI Boot Camp that we hosted in Vancouver, BC (you should have been there, it was awesome).  At the outset, we talked about how to identify if your data is in First Normal Form (the format that is ideal for a PivotTable), and I showed this data set:

Data table with nested data sets

Data table with multiple data points per cell

Notice how the invoice has multiple ItemID, multiple Quantity and multiple Price fields per cell?  That's not good at all.

What we Really Need - Data in First Normal Form

As I explained in the Boot Camp, it is essential that the data source be in First Normal Form in order for a PivotTable to consume it.  What does that mean?  It means that it needs to look like this:

Data in 1NF

Data shown in First Normal Form (1NF)

Notice that in this case the data is atomic - it only has one data point per cell.  In addition, there is now one complete record per row.  The InvoiceID shows on every row now, and each data point has been correctly split up and applied to them.

So what's the thrust of the Power Query Challenge 6?

Well, as it turns out this is a bit tricky.  There are a few issues at play here:

  • The data is separated by line feeds within the cells
  • There are a different number of line feeds in each row
  • At least the number of line feeds is consistent for each cell in the entire row though!

So the challenge is this: break the table apart so that the data is in First Normal Form like the second image.

You can download the data source file (and post your solutions) in our forum here.  I'll give you my version tomorrow.

Power Query Challenge 5

It’s been a while since our last challenge, so hopefully you’re ready for Power Query Challenge 5!  This is one that came up as I was teaching one of our 3 day Excel BI Bootcamp courses for a client (similar to this public course!), which actually made it doubly tricky.  Why?  Because I needed the person to be able to both understand how I approached the task, as well as be able to maintain it after I left.  I didn’t want to just kick up some magic voodoo and walk away, as that isn’t really fair.

The issue for Power Query Challenge 5

The challenge in this case was to allocate a fixed amount for a series of account IDs.  They provided a starting ID, an ending ID, and a value.  Sounds easy so far, right?  Here’s the problem… the key part of some of those IDs was purely numeric, but the key part of others was alphanumeric!  An example of the source data:

Can you make a series from one to the other?

And the required output:

Desired Power Query Output

Here is the data we need to create

The rules – as they were explained to me – were as follows:

  • The first 7 digits won’t change for the series
  • If there is no “To” value, then we only need the single value
  • If the final character is text, it will never exceed Z. (i.e. we’ll never go from 10A to 11C)

How would you solve Power Query Challenge 5?

My first thought was that I’d have to reach to a custom function to do this, but again, I didn’t feel that was fair to my audience with only a day of Power Query training under their belt.  (With Dimensional Modeling and DAX to cover in our workshop, we don’t have time to take people deep into custom functions.)  After playing with it for a bit though, I was able to work past that problem and come up with a UI driven solution which works.  (Having said that, how you decide to solve this is really up to you!)

Now last time, I posted my solution the same day, along with my steps.  The result was that we had less engagement, possibly because that gave too much away.  So this time, I’ve created the thread for you to post your solutions in the forum, but I’m going to hold off on posting my approach until tomorrow.

Good luck!

Creating Two-Tiered, Multi-Frequency Period Cycles

Sometimes, what should be easy code to write has unexpected pitfalls resulting in opportunities for new learning. I needed to generate a one or two-tiered stream of date values with multi-frequency period cycles, monthly vs annual.

For example, after installing a piece of equipment I wanted to schedule 12 monthly calibrations, beginning with the month after installation, followed by 3 annual calibrations, beginning 6 months after the last monthly calibration.

This is a table of schedule parameters that defines a few sample scenarios:

Example table outlining desired multi-frequency period cycles

Create a Simple List using List.Generate()

My plan was to create generic schedule scenarios, based on specific types of equipment and use period parameters relative to the installation month.

My first thought was to use Power Query’s List.Dates() function to create the sequence of dates, but List.Dates() only increments by combinations of days/hours/minutes/seconds. Not helpful when you need to increment by months. Consequently, I turned to the List.Generate() function to generate the list of periods.

First I wrote some M-Code to test the concept using hard-coded parameters:

Query: JustCreateList
M Code:

M code for JustCreateList
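
Since the M-Code above is shown as an image, here’s a rough sketch of the same idea with my own hard-coded parameter names and values (not Ron’s exact code):

let
    InstallDate   = #date(2019, 6, 15),
    PeriodCount   = 12,   // 12 monthly calibrations
    FrequencyMths = 1,    // monthly
    // Generate one date per period, stepping forward from the install month
    Periods = List.Generate(
        () => 1,
        each _ <= PeriodCount,
        each _ + 1,
        each Date.AddMonths(Date.StartOfMonth(InstallDate), _ * FrequencyMths)
    )
in
    Periods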

And the results were this:

Results for JustCreateList query

Great! That was easy. If I can make a list and add it to each row of a table, then I can expand each list and build cycle dates.

Use List Values to Create Dates

Query: BuildDates_HardcodedParams
M-Code:

M code for BuildDates_HardcodedParams

And the results are:

Results for BuildDates_HardcodedParams query

So far, so good. (I may leave early today.)

Use Column Values as List.Generate() Parameters

Next, let’s use column values to drive the Lists. What could be easier, right? Here’s one of my many failed attempts:

Query: BuildDates_ColValParams_V1
M-Code:

M code for BuildDates_ColValParams_V1

And the results are:

Error generated by BuildDates_ColValParams_V1

Wait! What??? Where are my lists? And, what “field access”?

I Discover “Internal Context” and “External Context”

I won’t go into all of the iterations I tried to get that darn code to work. Suffice it to say that I tried every conceivable variation of “this”, “each”, “_” and anything else I could think of. All to no avail… Same error message.

Now, I could have built a separate function (which I did, and it worked):

Query: fnCreateList
M-Code:

M code for fnCreateList

I also could have embedded a function within the M-Code (which I did, and it worked):

Query: SingleTier_EmbeddedFunction
M-Code:

M code for SingleTier_EmbeddedFunction

BUT, I wanted a concise, elegant solution. I wanted to just put the List.Generate() function in my code and reference the table columns, just like I would with an Excel function (which I did and…well…you saw the results). I needed help.

So, I posted on Ken’s forum. And who do you think volunteered to assist me? Miguel! (Thank you, Miguel.) Once he established that my interest in referencing column values in List.Generate() had gone way past idle curiosity and quest…all the way to “mission from god”, he gave me a working example and a concise explanation.

Handling Internal and External Context

Because the List.Generate() function is essentially recursive, it has to look at its own sequential results and test them against its internal limits. It needed explicit code to indicate when it should use external context (table references) and when it should use internal context (intermediate function results). I won’t pretend that I understand the “why” of the required syntax, but I quickly picked up on the “how”.

Here’s the working M-Code that uses column values in the List.Generate() function to create the first date sequence:

Query: SingleTier_EmbeddedInnerContext
M-Code:

M code for SingleTier_EmbeddedInnerContext highlighting (listval)

By simply creating a dummy parameter (listval), the List.Generate() function automatically populated it with its own sequential results and it understood that the column references pointed to the Table! I could have named that parameter “moonchild” or “Sue” or anything else. All that mattered is that it had something to populate.

Now, my results were valid:

Results for SingleTier_EmbeddedInnerContext query

Over the first major hurdle and it was a BIG one!
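
To make that internal/external distinction concrete, here is a minimal sketch of the pattern with hypothetical column names (not Ron’s code): the row passed into Table.AddColumn supplies the external context, while the inner parameter carries List.Generate()’s own running value:

let
    Source = ScheduleParams,   // the schedule parameters table (name assumed)
    AddCycleDates = Table.AddColumn(
        Source,
        "CycleDates",
        (row) =>                                   // external context: the current table row
            List.Generate(
                () => 1,                           // internal context: the running period number
                (p) => p <= row[Tier1Periods],     // internal value tested against a column value
                (p) => p + 1,
                (p) => Date.AddMonths(row[InstallDate], p * row[Tier1FreqMonths])
            )
    )
in
    AddCycleDates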

Combining Lists to Generate Two-Tiered Multi-Frequency Period Cycles

The rest of the solution was relatively easy after that. I needed to allow for a second tier of cycle dates. The basic steps were:

  1. If Tier1 begins after the Install Date, make a placeholder for the Install Date… a period zero.
  2. Create the Tier1 date sequence.
  3. If there’s a Tier2 sequence, create it.
  4. Sequentially append the constructed cycle date sequences.

Since that involves M-Code not covered in this blog post, I’ll just post the final solution:

Query: TwoTier_EmbeddedInnerContext
M-Code:

M code for TwoTier_EmbeddedInnerContext highlighted (listval)

And here are some of the results of my table containing the desired two-tiered, multi-frequency period cycles:

Final table displaying our multi-frequency period cycles

Power Query/Get and Transform has steered my career into new, creative, challenging, and rewarding directions. I’d like to thank Ken for giving me the opportunity to share what I learned about internal/external context and my learning process. I welcome any and all questions and comments.

-Ron Coderre