Calculate Start and End Dates

I got an email from a reader this morning who asked how to calculate start and end dates for a given employee when they have had multiple terms of employment.  Since it's been a while since we've had a technical post on the blog, I thought that this would be a good one to cover.

The Challenge

In this case (which you can download here) we are given the table shown below on the left, and we need to create the table shown on the right:

image

As you can see, John's start date needs to be listed as Jan 1, 2013, and his end date needs to be based on the last date he worked here: Oct 31, 2016.

How to Calculate Start and End Dates using Power Query

My first thought was "we'll need a custom function to do this", but as it turns out, there is a MUCH easier way to accomplish this, and it's 100% user interface driven as well.  I'm also virtually certain that the performance will be much better over larger data sets (although I haven't specifically tested this.)

Let's take a look:

Step 1: Connect to the data

This is pretty easy, just select the table and use Power Query to connect to the data:

  • Excel 2010/2013:  Power Query --> From Table
  • Excel 2016: Data --> From Table/Range

We'll be launched into Power Query and will be looking at our short little table:

image

Step 2: Calculate Start and End Dates via Grouping

The trick here is to actually use Power Query's Grouping feature to calculate the start and end dates.  To do this:

  • Go to Transform --> Group By

The dialog will open and is already offering to group by Name, which is what we need.  Now we just need to select the grouping levels.  The first is going to be our Start Date, so we'll rename it as such and change it to calculate a Min of the From column:

image

The effect here is that this will provide the lowest value from the "From" column for each employee.  A perfect start.

Next, we need to add a new grouping level to get the End Date.  To do that:

  • Click Add Aggregation
  • Configure the new column as follows:
    • New column name: End Date
    • Operation: Max
    • Column: To

It should look as follows:

image

And believe it or not, you're done!

image

How this works

The key here is that the grouping dialog in Power Query works over all records in the group.  This is really important, as the first aggregation column has no bearing on subsequent columns… if it did, we'd get the max for the first record only, which is not at all what we're looking for.  Instead, the Group By first restricts the data down to all of John's records, then pulls the Min and Max out of those three rows, returning those as the values.
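
For the curious, the M code that the Group By dialog writes for us looks something like this (a sketch; the step and type names in your generated code may differ slightly):

    = Table.Group(Source, {"Name"}, {
          // Min of the From column across all of the employee's records
          {"Start Date", each List.Min([From]), type date},
          // Max of the To column across all of the employee's records
          {"End Date", each List.Max([To]), type date}
      })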

The other thing that is worth noting here is that the order of the source data is irrelevant.  We could have provided either of these options and the answers would still have been calculated correctly:

image

Final Notes

It's also worth mentioning that this technique to calculate start and end dates works in both Excel and Power BI, as the feature set is identical between the two products.

Sometimes things that look hard are actually really easy when we have the right tools in our hands, and this happens to be one of those situations.  🙂

Power BI Boot Camp

I'm pretty excited to announce that we have our very first Power BI Boot Camp coming up in Vancouver, BC on Feb 22-24.  This is something that I've wanted to do for a while, and it's going to be awesome!

image

The boot camp is going to be 3 full days of hands on Power BI, and is intended to get you up and running, building cool solutions that will impact your business.

Why do you care about the Power BI Boot Camp?

Okay, I get it… you're an Excel person, and while you've heard of Power BI, you're not quite sure if you need it, and you certainly don't know if you can justify the cost of a 3-day Power BI Boot Camp...

I honestly believe that the answer to both questions is an emphatic yes!

Top 5 reasons you need to learn about Power BI:

We'll cover the "how do I convince my boss" near the end of the post, but first, let's talk about what Power BI is, and why it is relevant to you.

  1. A major reason that companies are behind in their Business Intelligence development is that they are using outdated versions of Excel and cannot upgrade (or are afraid to).   Power BI Desktop is a separate program that sits completely outside Excel, meaning that you can run the most up to date business intelligence tool regardless of the version of Excel you're stuck on.  I'm looking at you, Excel 2003 & Excel 2007 users. 😉
  2. You can do a TON of stuff with Power BI for free.  Yes, there are services that cost a nominal fee, but you'll be surprised at how much you can do for absolutely nothing.  The investment you make here will pay back in spades as you begin to automate tasks that were taking you huge amounts of time, and get the ability to deliver true business intelligence to your stakeholders.
  3. Power BI's data collection experience is based on Power Query, and its modelling experience is based on Power Pivot.  So if you have those skills from your Excel journey, they will be totally relevant in Power BI.  And if you don't?  The new skills you learn at this course related to these areas are portable back to Excel as well.  (So again, if you're stuck on an older version of Excel, this is your chance to upskill BEFORE your Excel is updated.)
  4. If you've ever tried to share Excel dashboards, you know how hard it can be to do so and maintain them.  And if your end users want their reports accessible and interactive on mobile?  Forget it.  But Power BI solves these issues, including delivering interactive reports to your mobile phone (yes, even your non-Windows phones!)  Rest assured, however, that if your company is cloud averse, this tool is not dead to you, and it's still worth coming.
  5. You'll be learning in a small class environment from a world class expert who cares about delivering value.  I've been there.  I know how hard your job is.  And I want to help you become more effective.  That is what my company is all about, and I'm very serious about it.

What will the Power BI Boot camp cover?

To be completely honest, the better question is probably "what won't we look at?"  I've got all kinds of stuff cooking for this.  When I first started drafting the Power BI Boot Camp outline I was wondering how I'd ever fill 3 full days.  Now I'm trying to figure out what to cut so that I can fit it all in.  Keep in mind that it's not necessarily going to be in the order below, as some topics are MUCH bigger than the summary I'm providing, but here's a bit of an overview…

Where it all starts…

We'll start by building a basic Power BI solution on an Excel file: some minor data modifications, some cool visuals, and then publishing it to the cloud service.  In other words, we'll walk through the most basic of Power BI journeys.  I want to do this so that you can see just how easy it is to build a cool solution that works and interacts, like this one.  (Go ahead... click any of the bars and see how it cross filters in your web browser!)

More data, more transformations, and some modelling…

From there we'll start to dive deeper, teaching the techniques that allow you to build more robust models and solutions.  We'll spend a bunch of time looking at different data sources, as well as some of the more complex methods for fixing bad data to make sure that you can use it in your solutions.  We'll also show you how to link these tables together so that they filter nicely across multiple visuals.

image

Let's get visual…

Power BI is all about getting visual with your data.  It has a big collection of built-in visuals, as well as an ever growing gallery of custom visuals.

While I won't promise that we'll look at each different visual in the product (we only have 3 days), we are going to look at a lot of them, including some of my favourite custom visuals.  I want to show you how to create them, format them, and what data you need to get the best of them to work.

image

We're building dashboards to tell a story here, so I want to try and get that as close to real life as possible for you.  From bar, column and line charts, to maps and bullet charts, I really want to get you comfortable with these, as well as how to control the interactions between them.

And speaking of Dashboards… do you know the difference between a report and a dashboard in Power BI?  That will get clearly explained here as well.

Getting the numbers right…

We'll get into working with the DAX formula language, and understand how to add and override filter context to make your formulas show exactly what you want when you filter any visual.

How about Calendar intelligence?  I have a huge passion in this area too.  The Power BI Boot Camp won't just teach you how to build your own calendar table, you'll also leave armed with a collection of DAX calendar intelligence measures.  Those measure patterns are super important as they'll allow you to report results correctly when you ask for Month to Date data or Month to Date for 2 months ago.

image

Sharing is caring!

We'll look at sharing via the web first, both for internal and external users on an invite-only basis.  But even more, I'm also going to walk you through embedding your reports in a web page so that you can show the world how awesome your company is.

In addition, you'll want to bring your phone with the Power BI app installed.  Why?  Because I'm going to make sure you get set up with your reports and dashboards delivered to your phone in a mobile optimized fashion where you can even annotate your reports by hand.

image

And if you'd prefer not to be cloud based?  We'll look at creating Power BI templates that you can distribute within your organization as well.

You know… while we're talking about sharing… have you ever sent a report to a manager only to have them come back with one of those "can you show me this?" type questions?  Let's try and solve that little problem by allowing our manager to ask their own questions using natural language queries.  In other words, how cool would it be if your manager could just type "Bar chart total stays by city" and it did it, even if you've never built that report?  Because they can.

image

And what about updates and security?

Wait… how do I keep this stuff up to date?  And how do I ensure that only the right people are looking at their data?  Well of course we'll cover those topics!

Not only will we look at scheduling refresh for cloud hosted data sources, but also your on-premises data sets via gateways and 3rd party applications.  Ooohh… connecting data to the cloud… can I hear anyone saying "Security!"  Well yes, so let's talk about both encryption and also how to restrict the data sets to show only relevant rows to the correct people.  (And let's face it, you will want that last part even if you keep your data local!)

So why Ken's Power BI Boot Camp then?

My goal is to deliver real value to you.  Anyone who has been to any of my courses in the past knows this:

  • It's virtually impossible to find someone who has more passion for what they teach than me
  • We limit the class size so that it doesn't get too big.  A huge part of the reason for this is that…
  • My style is to roam the room and help people keep up with the content so that everyone learns
  • I provide completed examples for all the work we go through, with catch up points along the way, just in case the above isn't achievable
  • I provide a ton of supplemental resources as well

I've been to classes before, and I know that it can be somewhat difficult to apply the course materials to your own work.  For that reason the Power BI Boot Camp is designed like all my courses: we build solutions the way you'd expect to build them in the office, exposing the pitfalls where the software doesn't work as you'd logically expect.  This is intentional, as we teach you not only how to do things correctly, but also how to debug problems as they happen.

What skill set do you need?

Do you need experience working with Power BI in order to come to the Power BI Boot Camp?  Heck no, that's what this course is about!  And while some Excel knowledge is helpful, you're certainly not going to see a test on Excel functions here.

Ultimately what you need is a hunger to learn; a desire to change your reporting world.  If you can bring me that enthusiasm, I'll teach you what you need to know in order to make that happen.

Making the Case to Your Boss

I managed an accounting department in my former life, so I totally get this.  We're all about data here, so here's the up front details:

The cost of the course is $1,495 plus GST.  It includes your breakfast and lunch all 3 days, but you need to look after your dinner and hotel room (if necessary).  So now you can work out the total cost.

I'll let you do the math and work out how many hours per week you need to save yourself in order to break even.  If you spend a significant amount of time cleaning up data to be used in Excel, you'll find that you'll break even on this alone.

Honestly though, the true value proposition of this course isn't the ability to break even on a labour basis, it's about the opportunity costs.  Opportunities like:

  • Identifying programs that are costing your company money which don't earn returns.
  • Finding the best paying leads to pursue for greater sales.
  • Delivering value to your stakeholders when and where they need it.
  • Reducing inventory during slow seasons to reduce carrying costs.
  • Highlighting key metrics that are under performing.
  • Building solutions to blow the minds of your clients or stakeholders.

Every business is different, but ultimately it's the opportunity to get better information into the hands of those who make the decisions, be it you or your boss.  That benefit FAR outweighs the price we charge, and will break even for you VERY quickly.

I hope to see you at the course.  It's going to be awesome, and it will change your data life forever. 🙂

Register for the Boot Camp here.

Make the Sample Binary file path dynamic

In this post we will explore how to make the Sample Binary file path dynamic when combining files using the new Combine Binaries experience in Excel and Power BI Desktop.

Sample Files

If you'd like some sample data files to play with, you can download them here.

Why do we even need to talk about this?

I've covered the new combine binaries experience in my last couple of posts on:

But one thing I didn't dig deep into was the Sample Binary file path, and the fact that it actually gets a hard coded file path.  To replicate the issue, here's how I set up my quick test:

  • Open the application of choice (I'm going to use Power BI Desktop here)
  • Get Data (create a new query) From File --> From Folder
  • Browse to the folder path and click Edit

In this case I've browsed to the following folder path, which only contains a single file (so far):  C:\Users\KenPuls\Desktop\CSVs.  And here's what it looks like in Power BI Desktop or Excel:

image

And now I click the Combine Binaries button at the top right of the Content column, resulting in this:

image

Now, as I discussed in the first post in this series, we know we can modify the "Transform Sample Binary From…" step to see those changes in the final output.  So what's the issue here?  I'm going to right click the Sample Binary and choose to view it in the Advanced Editor:

image

Note:  I did add a line break between the 3rd and 4th lines, so read that as one.

The key part to notice here is that the file path, despite being in the original CSVs query, is also hard coded into this query TWICE.  That makes it very difficult to port this from one location to another: simply changing the file path in the CSVs query is not sufficient, as this query will still break upon refresh.  It's for this reason that we need to make the Sample Binary file path dynamic: so that we only have to change it in one place.
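
To make that concrete, the Sample Binary's generated code looks something like this (a sketch based on my folder; your generated step names will differ):

    let
        // Hard coded path #1
        Source = Folder.Files("C:\Users\KenPuls\Desktop\CSVs"),
        // Hard coded path #2, along with a hard coded file name
        #"Jan 2008 csv" = Source{[#"Folder Path"="C:\Users\KenPuls\Desktop\CSVs\",Name="Jan 2008.csv"]}[Content]
    in
        #"Jan 2008 csv"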

How to make the Sample Binary file path dynamic

To start with, I'm going to throw this solution away and start over completely.  And again, while I'm using Power BI Desktop to illustrate the method to make the sample binary file path dynamic, this will work the same in Excel with only one exception.  (Once you have the new combine binaries method in Excel, anyway.)

Step 1 - Launch the Power Query editor

To get started, I'm going to launch myself into the Power Query editor, ideally without creating a new query.  This is easy to do in Power BI Desktop, simply go to the Home Tab and click the top of the Edit Queries button.  You'll be launched into the editor without creating any new queries:

image

In Excel, if you've never opened the Power Query editor before, there is no way to get in there without creating a new query.  You'll need to create a New Query --> From Other Sources --> Blank Query.  Then you can expand the Queries pane on the left, right click Query1 and Delete it.  Silly, but that's pretty much the way to accomplish this.  (If you have created other queries, you just need to edit any one to get into the editor, as it won't create a new one for you.)

Step 2 - Create a Parameter for your file path

Before we get started, we need to create a single place to update our file path.  For this we'll use one of the Power BI/Excel parameters.  To do that:

  • Go to Home --> Manage Parameters --> New Parameter
  • Set up the Parameter as follows:
    • Name:  FolderPath
    • Required:  yes
    • Type:  Text
    • Current Value:  <your file path>  (mine is C:\Users\KenPuls\Desktop\CSVs)
  • Click OK

This will result in a rather simple little parameter that looks like this:

image
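
Behind the scenes, a parameter is just a tiny query of its own.  If you peek at it in the Advanced Editor, you'll see something like this (a sketch of the generated code):

    "C:\Users\KenPuls\Desktop\CSVs" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true]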

Step 3 - Create a new query against the folder

Now that we have our parameter, we are ready to actually create the query we need against the folder.  So let's do that now.  (Oh, and if you're working with Excel, just stay in the Power Query editor - no need to go back to Excel first.)

  • Power BI Desktop:  Home --> New Source --> More --> Folder
  • Excel:  Home --> New Source (near the end of the ribbon) --> File --> Folder

When prompted for the folder path, instead of clicking Browse, click the ABC on the left and choose Parameter:

image

It will automatically populate with the FolderPath parameter and, upon clicking OK, will take you to the preview window where you can click OK (Power BI Desktop) or Edit (Excel.)
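
The net effect is that the new query's Source step now references the parameter rather than a hard coded path, along these lines:

    Source = Folder.Files(FolderPath)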

Step 4 - Combine the Binary Files

Now we'll combine the binary files… all one of them.

  • Rename the Query to "Transactions"
  • Click the Combine Binaries icon on the top right of the Content column

Just a quick side note here… in the current build of Excel we don't see this, but in Power BI desktop, we are taken to this window where we can control how the data types are determined:

image

This is pretty cool, and I assume it will be coming to Excel in the future too.  If your data types are consistent most of the time, you generally won't have to worry about this.  If, on the other hand, strange things happen in your data (say, once every 10,000 transactions you get a fractional sales unit), you may want to choose the "Entire Dataset" option to avoid truncated decimals.

For now, just click OK with the default to blow past this dialog.

Step 5 - Make the Sample Binary file path dynamic

And finally, here we are, it's time to make the magic happen and actually make the sample binary file path dynamic.  To do this we're going to make a couple of small edits to the Sample Binary's M code.

  • Right Click the Sample Binary --> Advanced Editor

NOTE:  Be sure not to accidentally hit the Sample Binary Parameter1… we don't want that one!

Now, first thing to notice is that the very first line no longer points to a hard coded file path, instead it points to our parameter.  That's very cool as a single update to the parameter means that both this query and the original one to pull the files from the folder will be changed when our parameter is updated.  One place to fix them both.

Now, there are still issues here, but I just want to do a bit of cosmetic cleanup first.  The second and last lines still start with the name of the file.  This is just something inside the M code that you'll probably never read again, but it's still good practice to clean it up:

image

I've changed both highlighted parts to read "SampleFile" to make the code a bit shorter.  And now I can focus on the real issue:

image

The highlighted portion above still holds both the hard coded file path and file name.  What this means is that even though the folder path is dynamic, if I update the parameter on a new computer, this step will still be pointing to the old source.  That is far from ideal.

Interestingly, it's really simple to fix when you know how.  You simply grab everything from the [ to the ] and replace it with 0 so that it looks like this:

image
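
In code terms, the edit amounts to something like this (again a sketch, using the "SampleFile" name from my cosmetic cleanup):

    // Before: hard coded folder path AND file name
    SampleFile = Source{[#"Folder Path"="C:\Users\KenPuls\Desktop\CSVs\",Name="Jan 2008.csv"]}[Content]

    // After: always grab the first file returned from the folder
    SampleFile = Source{0}[Content]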

To show that it updates properly, I'm going to click Done, and throw a new file in the folder called "Feb 2008.csv".  Since this comes earlier in the alphabet, we'd expect it to show up before "Jan 2008.csv", and it does when I refresh the preview window:

image

The End Result

The biggest reason I want to make the sample binary file path dynamic is the scenario where I email the solution to someone else, and they have a different file path to the data files.  In this case they now only need to edit the project, update the FolderPath parameter, and everything will work again.

Why Not Edit in the Formula Bar?

In truth, you don't actually have to go into the Advanced Editor to update your code.  I made a cosmetic fix in there, as I actually do go back and read code later.  Since the default step name leaves a red herring in the code, I wanted to nail that down.  If you're never going to read the code though, it's cosmetic.

In effect, all that is really necessary is to replace the code from [ to ] with 0 as we did above:

image

The problem is that if you do this here, it automatically kicks off 3 new steps that have to be deleted:

image

Granted it's not the end of the world, but since I want to clean up the code anyway…

Final Thoughts

You're not alone if you think this should be unnecessary.  In my opinion, this dynamic nature should be standard, and I think it would be an easy fix for the team to implement.  Marcel even posted a suggestion to modify this feature here, which you should consider voting for.

Combine Excel Files

If you've wanted to use Power Query to combine Excel files with a single click - like you could for TXT and CSV files - the feature is finally here*.  The Combine Binaries feature has been re-written (as I discussed yesterday), and it now allows for easy combination of Excel files.

* This new feature is in the O365 Fast Insider preview today and in Power BI Desktop's November update.  I'm expecting to see this show up in the January Power Query update for Excel 2010/2013.

Just a quick caveat, I'm going to cover the items specific to the Combine Excel Files experience here, and not dive deep into the methods of how to modify and adjust the created queries.  For that reason, if you haven't read yesterday's post, I highly recommend doing that first.

The Classic Combine Excel Files Experience

For anyone who has tried to combine Excel files in the past, you'll be familiar with this.  You create a new query to pull From File --> From Folder and navigate to the folder full of Excel files.  Then you hopefully click the Combine Binaries button:

image

And you get this:

image

Obviously that's not so helpful.  The way to deal with this was to go to Add Column --> Add Custom Column and use the formula =Excel.Workbook([Content]) to convert the data into tables.  Then some filtering and more user interface driven work was required in order to get your data.  The experience was a bit painful, and certainly not for beginner users.
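
If you've never seen it, that classic pattern ended up looking something like this in M (a sketch, with a hypothetical folder path):

    let
        Source = Folder.Files("C:\ExcelFiles"),
        // Turn each file's binary content into a navigable workbook table
        AddWorkbook = Table.AddColumn(Source, "Workbook", each Excel.Workbook([Content])),
        // Expand to reach the sheets/tables inside, then filter and clean up from here
        Expanded = Table.ExpandTableColumn(AddWorkbook, "Workbook", {"Name", "Kind", "Data"})
    in
        Expanded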

The New Combine Excel Files Experience

To start with, I'm going to take two Excel files with sales transactions in them and combine them easily using the new Combine Excel Files experience.  Here are the file characteristics:

  • Each file has a small set of data (2-3 rows)
  • The data is stored on Sheet 1 in a table
  • Both files are stored in the same folder

To get the data, I'm going to do the following:

  • Get Data using a new query "From File --> From Folder"
  • Browse to and select the folder
  • Click Edit at the preview window

As you can see, we've got a couple of Excel files here:

image

So I'll click the Combine Binaries button (at the top right of the Content column.)

And where this would have triggered an error in the past, it now kicks out a preview:

image

And what happens next depends on what you select (which is why they are numbered above.)

Combine Excel Files - Method 1

For the first kick at this, I'll select the Sample Binary Parameter 1 folder icon (indicated by the number 1 in the Combine Binaries preview).

Nothing will ever show in the preview window, but upon selecting the folder I can click OK, which will result in this:

image

As I showed in yesterday's post on the new Combine Binaries experience, we now get a bunch of new queries and a few steps along the way.  The end result of this query, however, is a listing of all the worksheets, tables and ranges in each workbook.  This is the view that will allow you to go back and granularly pick out what you need and transform it.  In other words, this is the detailed, hard core view: the equivalent of writing the custom columns that we used to have to write by hand.

Because this is so similar to the classic method, I'm not going to do much more with this.  The real point was to expose that selecting the folder in the preview window will bring you to this setup.

Combine Excel Files - Method 2

Method 2 revolves around selecting the table in the preview window; in this case the Sales table.  When we select that branch in the preview window we WILL get a preview of the data:

image

And when we click OK, we actually get the data combined nicely:

image

As discussed in the previous post, if we wanted to modify and/or change:

  • The Source columns (Source.Name or others): We modify the Removed Other Columns1 step in this query.
  • The data before it is imported and combined: We modify the Transform Sample on the left side.

Now this is MUCH easier than what we used to have to do!

Combine Excel Files - Method 3

But what if your data is not in an official Excel Table?  What if it's just data in a worksheet range?  Well, then you select the worksheet icon instead:

image

And the results are virtually identical to the previous method:

image

Why does that work?  It works because the Transform Sample is smart enough to automatically promote the first row to headers, so it doesn't actually need the table.  On the other hand, if that data wasn't in the first row, you may need to go back to the Transform Sample and tweak it to do what you need (like remove top rows, promote rows to headers, delete columns, filter, etc.)

Caveats When Trying to Combine Excel Files

This experience will work well for many things, but as always there are some caveats.

Single Object Only

The default experience here is to combine based on the object you select.  In other words, if you select Sheet 1, it will combine Sheet 1 from each file.  It won't combine all sheets in the file based on the Sheet 1 code.  If you want to do that, you need to go back to Method 1 above, filter to the objects you want, and deal with them using classic import methods.  (Unless you try to get real techy and build the function then repurpose it to use in that table - something I have not done yet.)

Preview Based On First Item In the List

The preview and import templates are based on the first file in the list.  You can see how that affects things when I throw a new file into my structure that has different worksheet and table names:

image

While the two Sales workbooks have Sheet1 in them, this one doesn't, making it impossible to use this function to combine the first worksheet in each file. (The template would be based on Dec and would return errors for the other two files.)

If the order is important, you'll need to sort the file list first to get the correct file to the top before you trigger the Combine Binaries function.

For the record, I have sent an email to the Power Query team suggesting that it would be nice to get the option to pick the file here which the template should be based upon.  That would make this much easier to work through, I think.

Inconsistent Columns Are Bad News

Let's say that you have two files with tables in them that have different column names (or quantities).   The transformations generated will actually deal with this correctly, resulting in a column of tables which have different headers.  All good so far, but when the main query gets to the last step, it expands the column of tables based on the headers for the table in the first row only.  This is actually a standard thing, so not a surprise; I just want to make sure you don't think this is a holy grail that will solve the differing column issue.  (We have more work to do in that case.)
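
If you look at that last step in the Advanced Editor, you'll see why: the expansion carries a hard coded list of column names, along these lines (a sketch; the step and column names here are hypothetical):

    // The expanded column names were generated from the FIRST file only
    #"Expanded Tables" = Table.ExpandTableColumn(#"Removed Other Columns", "Tables", {"Date", "Amount"}, {"Date", "Amount"})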

Overall Thoughts

At the end of the day, I have to say that this is a pretty welcome addition.  I'm still not a fan of the names of the generated queries, and I would add something to change the template file, but I think this is going to make it a LOT easier for people to import and transform Excel files than it has been in the past.

New Combine Binaries Experience

One of the updates in the latest Excel 2016 Insider preview build is a new Combine Binaries experience for Power Query.  This is both good and bad, and in this post we'll take a detailed look at both sides before it hits you in your Excel implementation.  (With this new feature in the O365 Fast Insider preview and with it already existing in Power BI Desktop's November update I'd expect to see this show up in the January Power Query update for Excel 2010/2013.)

A Little History Lesson

Before we start, let's get everyone on the same page.  If you didn't know this, the Combine Binaries feature is one of the coolest things in Power Query or Power BI Desktop, as it lets you do a one click consolidation of "flat" files (txt, csv and the like).

How Combine Binaries (Used to) Work

In order to make this work, you'd set up a new query to get data From File --> From Folder, browse to the folder that contains your files, select it, and move past the preview window.  At that point all the magic happens when you click the Combine Binaries button at the top of the Content column:

image

That "one-click" experience would take your source folder and add four steps to your query:

image

And it puts it all together nicely.  So cool, so slick, so easy.
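
In M terms, those four steps translated to something like this (a sketch; the delimiter and encoding depended on your files):

    let
        Source = Folder.Files("C:\CSVFiles"),
        // Mash every file's content together into one big binary
        #"Combined Binaries" = Binary.Combine(Source[Content]),
        // Interpret that combined binary as a single CSV document
        #"Imported CSV" = Csv.Document(#"Combined Binaries", [Delimiter=","]),
        #"First Row as Header" = Table.PromoteHeaders(#"Imported CSV")
    in
        #"First Row as Header"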

Why did we need a new Combine Binaries experience?

So the first real question here is "Why even modify this experience at all?"  As it happens, there were a few issues in the original experience:

  1. You lost the original file name details.  If you wanted to keep that, you needed to roll your own custom function.
  2. Headers from each file were not removed, so you'd have to filter those out manually.
  3. It could only work with flat files (csv, txt, etc..) but not more complex files like Excel files.

For these reasons, the team decided to build a more robust system that could deal with more files and more data.

The New Combine Binaries Experience

So let's look at what happens in the new experience.  We start the same way as we always did:

  • Set up a new query to get data From File --> From Folder
  • Browse to the folder that contains your files, select it and click OK
  • Click Edit at the preview window
  • Click the Combine Binaries button at the top of the Content column

At this point a whole bunch of crazy stuff now happens, and your query chain looks totally different than in the past:

image

There are 3 main areas that are different here:

  1. A whole bunch of new queries and parameters with very similar names,
  2. The original source name is retained
  3. Different query steps than in the past

If your first reaction to this is being overwhelmed, you're not alone.  I'll admit that my first reaction to this was not a happy one.  There is a huge amount of stuff injected in the file, it's difficult to follow the relationships in the M code (even if you do know how to read it all), and it isn't intuitive as to what to do with it.

At the end of the day, the biggest change here is that things happen in a different order than they did in the past.  In the original implementation of the Combine Binaries feature set, it combined the files first, then applied some other Power Query steps.

The new method actually examines the individual files, formats them via a custom function, then appends them.  This is very different, and it actually gives us a bit more flexibility with the files.

What's the end effect that will be different for you?  Simply this:

  • More queries in the chain (maybe you don't care), and
  • The file name is preserved by default (which was not there in the past)

image

Now if you are good with everything here, then no worries.  Continue on, and you're good to go.  But what if you want to make changes?

Making Changes in the new Combine Binaries Experience

The biggest challenge I have with this new implementation is that if you are a user hitting this for the first time, how and what can you change?

Changing or Preserving File Details

What if I wanted to keep more than just the file name… maybe I wanted to keep the file path.  You'll be surprised, but this is actually pretty easy now.  Since Power Query wrote a custom function for us to import the data, all the file properties were already preserved for us.  But it also made a choice to keep only the source file name.

To modify this, we just look at the steps in the Applied Steps window, click the gear next to "Removed Other Columns", and choose the ones we do want to keep:

image

So in this case, I can actually uncheck the box next to Source.Name and remove that from the output.  (We want to keep the very last step, as that is what actually appends the data from the files).

Also… after looking at the Applied Steps window, I realized the Renamed Column1 step was only in place to avoid a potential name conflict if you had a column called Name (there is a really good reason for this which I'll look at in another post.)  In this case it is totally unnecessary, so we can just delete it.

So now our code looks as shown below, and the output looks similar to what we would see in the old experience:

image

Notice I said that it looks similar - not that it is identical.  This is actually better as there are no headers repeating in the data at all, so we don't need to get rid of those.  That is an improvement.

Modifying the Import Function

Now, that's all good for the details about the file, but what about the data?  What if I only want records for department 120?

To understand the impact of this next piece, we need to understand that there are two ways to combine data:

  1. Bring ALL the data into Excel, then filter out the records you don't want
  2. Filter out the records you don't want, and ONLY bring in the ones you do

Guess which will consume less resources overall?  Method 2.  But the challenge here is that Power Query encourages you to use Method 1.  You're presented with a full table that is just begging you to filter it at this point… but you are better off dealing with it using Method 2… only it's not obvious how to do that.

The kicker here is that the logical flow of Power Query's Applied Steps window has been interrupted with the "Import Custom Function1" step.  And if you've ever used Power Query before, you know that modifying a Custom Function is far from a beginner friendly thing to do.

As it happens though, the Power Query team has given us a way to easily modify the custom function… it's just… well… let's just say that the name isn't as intuitive as it could be:

image

So this is the secret… if you select the "Transform Sample Binary from Combine Binaries" (what a mouthful!) it takes you to the template that the function is actually using.  To put this another way… any changes you make here will be used by the custom function when it imports the data.

So here's the changes I made in the Transform Sample:

  • Removed the Change Type step
  • Filtered Dept to 120
  • Renamed TranDate to Date
  • Renamed Sum of Amount to Amount
  • Change Type with Locale on the Date column to force it to US dates
  • Removed Errors from the Date column
  • Change Type on the other columns

In other words, I do the majority of the cleanup work here based on the single file.  The end result for the Transform Sample query looks like this:

image
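
In M terms, the finished Transform Sample reads something like this (a sketch; your column names, types and generated step names will vary):

    let
        // The sample binary is the template file the function re-uses for every file
        Source = Csv.Document(#"Sample Binary Parameter1", [Delimiter=","]),
        #"Promoted Headers" = Table.PromoteHeaders(Source),
        // Filter BEFORE combining, so unwanted records never get imported at all
        #"Filtered Rows" = Table.SelectRows(#"Promoted Headers", each [Dept] = "120"),
        #"Renamed Columns" = Table.RenameColumns(#"Filtered Rows", {{"TranDate", "Date"}, {"Sum of Amount", "Amount"}}),
        // A locale-aware type change forces US-style date interpretation
        #"Changed Type with Locale" = Table.TransformColumnTypes(#"Renamed Columns", {{"Date", type date}}, "en-US"),
        #"Removed Errors" = Table.RemoveRowsWithErrors(#"Changed Type with Locale", {"Date"}),
        #"Changed Type" = Table.TransformColumnTypes(#"Removed Errors", {{"Amount", type number}})
    in
        #"Changed Type"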

Hopefully this makes sense.  It's a fairly straightforward transformation of a CSV file, but rather than doing the work on the files that have already been combined, I'm doing it in the Transform Sample query.  Why?  Because it pre-processes my data before it gets combined.  And the cool thing?  Now we go back to the main query I started with:

image

The results have already propagated, and we're good to go.

Thoughts and Strategies

Overall, I think the new Combine Binaries experience is a good thing.  Once we know how to modify it properly, it allows us some great flexibility that we didn't have before - at least not without writing our own custom functions.  There are a couple of things we do need to think about now though.

Name Your Queries Early

Under this new method, I'd highly recommend that you change the name of your query BEFORE you click that Combine Binaries button.  All those samples on the left side inherit their name from the name of the query, and Power Query is not smart enough to rename them when you rename your query.  Here's a comparison of two differently named queries:

image

So while I used to think about renaming my query near the end of the process, we really need to think about that up front (or go back and rename all the newly created queries later.)

Where Should You Transform Your Data?

The next big thought is where you should run your transforms… should you do it on the master query (the one with the consolidated results), or the sample used for the transformation function?

The answer to that kind of depends.  A few instances where this could really matter:

  1. Filtering: I would always filter in the sample.  Why spend the processing time to bring in ALL of the data, only to filter it at the end?  Much better to modify the Transform Sample to avoid bringing in the records in the first place.
  2. Sorting: I would suggest that most times you'd want to run your sorts in the master query.  Let's say you have a file for each month, and you sort by date.  If you do that in the master query, things will be sorted sequentially.  If you do it in the Transform Sample, it will be sorted sequentially within each file, but if the files are out of order, your master will still be out of order until you re-sort by date.  (Apr comes before Jan in the alphabet.)
  3. Grouping:  This one is tricky… most of the time you'll probably want to load all of your transactions into the master query, then group the data.  So if I want to group all transactions by month and account code, I'd do it in the master query.  But there are definitely instances where you'll want to pre-process your data, grouping the data inside the individual file before it lands in your master query.  A case in point might be the daily payments list for payments you issue to vendors.  If they are batched up then sent to the bank, you'd want to import the file with all daily records, group them, then land that pre-grouped data into your master query.  That is an operation that you may wish to do at the Transform Sample level.

The good news is that it is easy to experiment here and switch things up as you need to.

And One More Major Impact…

This blog post has really focussed on how the new Combine Binaries experience impacts importing text files.  What it doesn't cover, however, is why it was REALLY built.  I'm going to cover that in tomorrow's blog post: how the new Combine Binaries experience allows one click consolidation of Excel files!

Power Query Dependencies Viewer

The November 2016 update is now out and it finally brings the Power Query dependencies viewer to Excel.  While it’s been out in Power BI Desktop for a while (as Matt posted about a while ago), this is huge news for Excel, as this feature has been badly needed.

Viewing Power Query Dependencies

To get to the Power Query Dependencies view, you simply need to perform the following steps:

  • Edit any query (just to get into the Power Query editor)
  • Go to the View tab
  • Click the Query Dependencies button

image

Once you do so, you’ll be launched into the Power Query dependencies window as shown below:

image

At first glance…

So at first glance, this is pretty exciting and yet – if you work with complicated Power Query setups like I do – you’ll find the Query dependencies view a bit… lacking in some areas too.

First off, if your query is complicated, it really does open that small.  Wow.  Now there is a scaling button down the bottom, but that quickly scales so that stuff is off-screen.  No problem, right?  We’ll just drag some stuff around then… oh… except we can’t.  Dragging any box around drags the entire model, not just that one box.  🙁

What can you do with the Query Dependencies viewer?

Rather than focus on the stuff we can’t do, I want to take a look at what we can (although I won’t be able to help making a couple of suggestions here as well.)

Maximizing the model layout

The first thing to point out is that despite the fact that it isn’t obvious, the window is resizable.  If you mouse over any border or corner you’ll get the arrows that indicate you can left click and drag to make the window bigger.

So normally the first thing I do is:

  • Move the window to the upper left of the screen
  • Drag the bottom right corner to make the model fill the entire screen
  • Click the little box icon in the bottom right corner by the scroll bar to “Fit to Screen”

After all, the reason I’m using this view is because the models are big!

Some things that would be really useful here:

  • It would be awesome if there was a Maximize button near the X in the top right (like the Power Query window and every other app has.)
  • It would also be good if we could double click the title bar and have it maximize the window (again, like so many apps out there.)

Either (or both) of those features would save me a lot of time.

Alternate Views for Tracing Query Dependencies

In the default view, the data sources are plotted at the top, and the queries cascade down below.  Fortunately you’re not stuck with this view, there are four different ways to display the model:

image

In this instance I’ve chosen Left to Right, which puts the data sources  on the left and fans the query dependencies out to the right hand side.

Honestly, if I had my preferred way it would probably be to use Bottom to Top (data sources at the bottom and data model tables on the top.)  To me this should basically “bubble up” the model tables to the top of the screen.  Unfortunately it doesn’t quite work like that… all we can guarantee is that the data sources will be at the bottom, but the model tables could end up anywhere.

Ideally, I’d love to have an option to force the Data Sources to be lined up based on the first choice in that menu, and the Load Destinations (whether table or data model) be lined up in the viewer based on the option chosen for the second choice.  This would allow me to easily see the “From” and “To”, with the chain of what happened in between.

Tracing Query Dependencies

In the image below (click on it to see the larger version), I’ve selected one of the tables in the middle of the query dependencies tree:

image

The effect is that it highlights all child and dependent queries in the data flow.  That’s cool, and I’m okay with this being the default behaviour when I select a query step.  Wouldn’t it be cool though, if we also had:

  • A right click option to trace precedent queries only
  • A right click option to trace dependent queries only

Those would be super helpful in tracing a query’s flow without the extra noise, something that is really important in being able to quickly dig in to the key factors you probably want to know about your query dependencies.

Identifying Load Destinations

So the very first thing I did when I threw this specific model into the query dependencies view was identify two queries that were not in the query chain.  “Awesome,” I thought, so I went and deleted them.  Then I restored from backup, as one of them was in use!

Don’t get me wrong, the view was correct, it’s just that the distinction for load destinations is so weak that I saw no arrows and assumed it was good to be removed.  As it turns out, the words actually matter here:

image

The Day Types table is created from a hard coded list.  Since there are no queries flowing in or out of it (it is floating above the lines) I nuked it.  I missed the fact – especially with it being on the left – that it was actually loaded to the data model.

Raw Data-Departments, on the other hand, is pulling from the Current Workbook and is loaded as “Connection Only”.

So here’s my thoughts here:

  • I’d love to see nodes that are loaded to worksheets or the data model identified.  Either an icon in the top right, or a shading in place would be ideal.  Something that makes them a bit less subtle than they are today.
  • I’m not a fan of the “Not loaded” term… it’s about as awesome as the “Load Disabled” that Power Query used to use about two years ago.  This should – in my opinion – be consistent with the UI and should read “Connection only”.  “Not loaded” makes it look like it isn’t working.

Navigating Query Dependencies

One of the issues I had above is that my Day Types table – being standalone – should not sit on top of any arrows… that’s just scary bad and misleading… but that’s actually part of a much bigger issue as this is kind of the style used throughout the entire tool:

image

This also leads me to another issue in that I need to be able to follow these arrows.  Today the only ability you have – because you can’t move the boxes – is to essentially print the query dependencies window (you’ll need screen capture software for that since there isn’t a print button) – and trace with a highlighter.

What I’d love to see in this instance is the ability to select one or more arrows and have them turn bold.  It would be an even bigger bonus if doing so also shaded the tables on each end of the selected arrows.  That would actually solve a few issues mentioned earlier too, allowing us to really drill into the relationships we need to trace.

Overall Impressions of the Query Dependencies Viewer

Overall it’s a good first version.  I’d really love to see some (or all) of the improvements I mentioned above, but it’s a HUGE amount better than what we had a month ago.  🙂

Extract Data from a Mixed Column

More and more I’m seeing examples where people are trying to extract data from a mixed column.  In other words, they have two data types in a single column, but need to find a way to extract one from the other.

Examining the issue

The sample data I’m using can be downloaded from this link.

I’m going to use Power BI Desktop for this, but the results will look identical in Excel using Power Query (except for the colour, of course.)

So let’s get started:

  • Get Data (new Query in Excel) –> From CSV –> MixedDataInColumn1.csv
  • Promote First Row as Headers

The issue can be seen in the red circles below… the report author injected each vendor’s name into the Part No column, on the row above that vendor’s first part in the list.

image

So the issue here is how to extract the vendor name from the Part No column.  The problem is that there isn’t any obvious way to do this.  We have different textual values in all columns, which could change over time.  There’s really nothing that we can test for reliably in this case.

How to Extract Data from a Mixed Column

There are actually a few different ways to extract data from a mixed column… a few of which we demonstrate in our Power Query workshop.  I’m going to show just one here.

Step 1 – Identify a column with a pattern you can exploit

The key we are really looking for is a column which has values or dates for all rows other than the ones with our vendors.  In this case we actually have two: Part No and Cost.  Both have text on the Vendor lines, but what look like values on the rest.  The challenge we have here is that we can’t always guarantee that Part No won’t have text in it.  It’s completely possible we could see a part number like TH-6715 or something.  So this leaves us with the Cost column.

Step 2 – Duplicate the identified column

This next set of steps is actually the trick that lets us work this out.

  • Right click the column in question and choose Duplicate Column
  • Right click the Cost - Copy column –> Change Type –> Whole Number
  • Right click the Cost - Copy column –> Replace Errors –> null

You should now have null values where the textual values were located:

image
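
Behind the scenes, those three steps generate M along these lines (a sketch; step names will vary):

    // Duplicate Cost so we can abuse the copy without touching the original
    #"Duplicated Column" = Table.DuplicateColumn(#"Promoted Headers", "Cost", "Cost - Copy"),
    // Forcing the copy to a whole number turns the vendor rows into errors…
    #"Changed Type" = Table.TransformColumnTypes(#"Duplicated Column", {{"Cost - Copy", Int64.Type}}),
    // …which we then convert into nulls
    #"Replaced Errors" = Table.ReplaceErrorValues(#"Changed Type", {{"Cost - Copy", null}})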

Step 3 – Use a little conditional logic

We now have something that we can use in order to extract the Vendor name.  So let’s build a little bit of conditional logic:

  • Add Column –> Conditional Column
  • Configure the Conditional Column as follows:

image

The only trick here is to make sure you change the Output to a column so that you can select from the list of columns.

  • Click OK
  • Right click the Vendor column –> Fill Down

The result is shown below:

image
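
And in M, the conditional column and fill down work out to something like this (a sketch):

    // Where the numeric copy is null, the Part No column holds a vendor name
    #"Added Conditional Column" = Table.AddColumn(#"Replaced Errors", "Vendor", each if [#"Cost - Copy"] = null then [Part No] else null),
    // Drag each vendor name down over that vendor's own parts
    #"Filled Down" = Table.FillDown(#"Added Conditional Column", {"Vendor"})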

Step 4 – Clean up

We’re now at the point of clean up which entails:

  • Filter the Cost - Copy column to remove null values
  • Delete the Cost - Copy column
  • Set the data types on all columns

The results now look as follows:

image

At this point we can commit the query and we are good to go.

Final Thoughts

This is not a new trick by any means; I’ve been using it for a long time.  The biggest key is really about identifying patterns and thinking outside the box.

It’s unfortunately very easy to get focused on the primary column we want to solve, and lose sight of the others.  (Trust me, I’ve been there too.)  Sometimes though, when a column is particularly tough to deal with, we need to just step back and take a look at the bigger picture to see if there is a pattern in another column that we can exploit.  In fact, I’d say that this is probably one of the most important things to master when working with Power Query.

Using Aggregate to Count Visible Rows

In this post I’m going to show one of my favourite financial modeling tricks: how to use Aggregate to Count Visible Rows.

Background

Often, when I’m building models in Excel, I like to group key assumptions at the top of the worksheet in one area. This allows me to change them easily from a centralized location. The problem is that sometimes I need to collapse them to see more of the model.  Of course, you can use this trick to collapse any block of rows (or columns) in your worksheet, so it’s applicable to all kinds of uses.

Let’s take a look at the basic setup:

image

So it’s essentially a block of cells to capture key rates and stats.  No secret there.  And on the left I’ve added some outlining so that I can collapse it easily.  To do that we simply select rows 3:6 and go to Data –> Outline –> Group.

The Trick in Action

Now, check this out… I click the – on the left, and the rows collapse.

image

But check out the message in cell A7.  It wasn’t there before, but now we’ve got a nice message that not only tells you there is an area that is collapsed, it also tells the user how to show the rows again.

Using Aggregate to Count Visible Rows

The trick to this is using the AGGREGATE function (which works in Excel 2010 or higher).  So let’s check out how this works.

As AGGREGATE can give us back a count of visible cells, we will be able to test if the number of visible rows equals zero, and then react to it using an IF function.  So let’s get started.

AGGREGATE's first parameter: the Aggregation Type

=IF(AGGREGATE(

When we open the parenthesis, we are prompted for the first parameter.  There are a variety of options here, but the one I want is COUNTA() – option 3 – which allows us to count the number of non-blank cells (either text or values):

image

Next up we put in the comma and we’re on to the second parameter.

AGGREGATE's second parameter: What to aggregate

image

Aha!  So using 5 will allow us to apply the COUNTA(), but ignore any hidden rows.  So it’s this parameter here that allows us to use AGGREGATE to count visible rows only.

AGGREGATE's third parameter: The data to aggregate

On to the next comma and now we need to select the range to count.  Now in this part we have two options.  Personally, I prefer to provide the range of the cells that will be hidden.  In truth though, you only really need to refer to a single cell in the range that will be collapsed.  Here’s what I went with:

=IF(AGGREGATE(3,5,A3:A6

Wrapping up the IF test

Perfect, and now we can just close the parenthesis and complete the test:

=IF(AGGREGATE(3,5,A3:A6)=0,

So, if the count of visible cells equals zero then… what do we want to do?

The IF test: If there are no visible rows...

This is the part that I think really makes this trick work.  I really like providing an arrow pointing to the + icon that shows up, and adding additional wording as needed.  This allows my users to know not only that there is hidden data, but how to display it again.  So for me, that message might look like:

  • “<-- Show assumptions”
  • “<--Click to expand Revenue assumptions”

You get the idea.  For this example I’ve gone with the following:

=IF(AGGREGATE(3,5,A3:A6)=0,"<-- Show assumptions",

The IF test: If there are visible rows...

And finally, we round it off with the messaging to provide if the count of visible rows is greater than zero (i.e. if the section is expanded).  Depending on what you want your model to do and how you want to display things for your end users, this could be something like:

  • “End of Assumptions”
  • “Total Revenue”
  • “Please insert new rows above this line”
  • “”

I think the first three are fairly self explanatory, but the last one is essentially two sets of double quotes.  Since everything between the quotes is returned to the cell as text, and there is nothing between the quotes, we get a blank cell.

The complete formula to use Aggregate to Count Visible Rows

Using that method, the finalized formula reads as follows:

=IF(AGGREGATE(3,5,A3:A6)=0,"<-- Show assumptions","")

Final Thoughts

My clients love this little trick. It’s fairly easy to set up, and is super useful for allowing people to hide/show the model sections that they want/need to review, without having them bogged down with all the info.

I also find it very useful when we’ve got multiple scenarios laid out on the worksheet.  Say I need to look at scenarios 1 and 3 at the same time: I can collapse 2 and just focus on the stuff I need to look at, avoiding scrolling up and down.

Values Become Text After UnPivoting Other Columns

Have you ever set up a nice query to UnPivot other columns, only to find that the query data types change when you add new columns?  This post will cover why values become text after unpivoting other columns.

Background

We’ve got a nice little table called “Data” showing here.  Nothing special, it just summarizes sales by region by month, and our goal is to unpivot this so that we can use it in future Pivot Tables.  (You can download the source file here.)

image

Now, you will notice that April’s sales are outside the table. This is by design, and we’ll pull it in to the table later when we want to break things.  🙂

UnPivoting Other Columns – The Hopeful Start

If you’ve been following my blog for any period of time, you’ve seen this, but let’s quickly go over how to unpivot this:

  • Select a cell in the table
  • Go to Power Query (or Data in Excel 2016) –> From Table

We’re now looking at the Power Query preview of the table:

image

Great, now to unpivot…

  • Hold down the Shift key and select the Country and Prov/State columns
  • Right click the header of either of the selected columns and choose Unpivot Other Columns
  • Right click the headers of the two new columns and rename them as follows (the generated M steps are sketched below):
    • Attribute –> Month
    • Value –> Sales
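
If you peek at the Advanced Editor at this point, the unpivot and rename steps that Power Query generated should look roughly like this (a sketch; the step names shown are the defaults and may differ in your file):

= Table.UnpivotOtherColumns(#"Changed Type", {"Country", "Prov/State"}, "Attribute", "Value")
= Table.RenameColumns(#"Unpivoted Other Columns", {{"Attribute", "Month"}, {"Value", "Sales"}})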

Re-Pivoting from the Data Model

With the table complete, I’m going to load this to the data model and create a Pivot Table:

  • Go to Home –> Close & Load –> Close & Load To…
  • Choose to Load to the Data Model

The steps to create the Pivot depend on your version of Excel:

  • Excel 2013: Go in to Power Pivot –> Home –> PivotTable and choose a location to create it
  • Excel 2016: Click any blank cell and go to Insert –> PivotTable.  As you have no data source selected, it will default to using the data model as your source:

image

With the PivotTable created, I’ve configured it as follows:

  • Rows:  Country, Prov/State
  • Columns:  Month
  • Values:  Sales

And that gives me a nice Pivot like this:

image

Let’s Break This…

Okay, so all is good so far, what’s the issue?  Now we’re going to break things.  To do that, we’re going to go back to our original data table and expand the range:

image

In the picture above, I've left-clicked and dragged the tiny little widget in the bottom right corner of the table to the right.  The table frame is expanding, and when I let go, the Apr column turns blue, indicating that it is now within the boundaries of the table.

With that done, I’m going to right click and refresh my Pivot Table, leaving me with this:

image

Huh?  Why was the sales measure removed?  And if I drag it back to the table, I get a COUNT, not a SUM of the values?  And even worse, when I try to flip it back to SUM, I'm told that I can't?  What the heck is going on here?

image

Importance of Power Query Step Order

To cut to the chase, the issue here is that when we first created the table in the data model, the Sales column was passed as values.  But when we updated the data to include the new column, the Sales column was passed entirely as text, not values.  Naturally, Power Pivot freaks out when you ask for the SUM of a textual column.

The big question though, is why.  So let’s look back at our query.

Our original data set

If we edit our query, we see that the steps look like this:

image

To review this quickly, here's what happened originally:

  • Source is the connection that streams in the source data with the following columns:

image

  • Changed Type set the data type for all the columns.  In this case the Country and Prov/State fields were set to text, and the Jan, Feb & Mar columns were set to whole number.  We can see this by looking at the icons in the header:

image

Note that if you don't have these icons, you should download a newer version of Power Query, as this feature is available to you and is SUPER handy.

  • We then selected the Country and Prov/State columns and chose to Unpivot Other Columns.  Doing so returned a table with the following headers:

image

Notice that the first three columns are all textual, but the Value column (later renamed to Sales) shows a numeric format?  Interestingly, it now shows as a decimal number rather than the whole number the source columns used, but it gets a numeric type at all because every column being unpivoted already had an explicitly defined numeric type.

The final steps were to rename our columns and load to the data model.  Since the data types had already been defined, the columns were sent to the data model with Sales as a numeric type.

Why Values Become Text After UnPivoting Other Columns

Okay, so now that we know what happened, let’s look at what we get when we step through the updated data set.

  • First we pulled in all the columns.  We can plainly see that we have the new Apr column:

image

  • The Changed Type step is then applied:

image

Hmm… do you see that last data type?  Something is off here…

So when we originally created this query, Power Query helpfully pulled in the data and applied data types to all the existing columns.  The problem here is two-fold: first, the Apr column didn't exist at the time; and second, Power Query's M code uses hard-coded column names when it sets the data types.  The net effect is that upon refresh, only the original columns have data types defined, leaving any new columns with a data type of “any” (or undefined, if you prefer).
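
To make this concrete, the generated Changed Type step looks something like the following M code (a sketch based on the columns shown above; your generated step may differ slightly):

= Table.TransformColumnTypes(Source,
    {{"Country", type text}, {"Prov/State", type text},
     {"Jan", Int64.Type}, {"Feb", Int64.Type}, {"Mar", Int64.Type}})

Notice that "Apr" appears nowhere in that list, so on refresh the new column passes through this step with its type still undefined.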

  • We then unpivoted the data, but now we see a difference in the output:

image

Check out that Value column.  Previously this was a decimal number; now it's an “any” data type.  Why?  Because there were multiple data types across the columns to be unpivoted, Power Query doesn't know which one is correct.  If one column were legitimately text and Power Query forced a numeric type on it, you'd get errors, so it errs on the side of caution here.  The problem is that this has a serious effect on the eventual load to Power Pivot…
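
If you want to verify this for yourself, Table.Schema makes a handy diagnostic.  As a sketch (assuming the default step name), you could add a temporary custom step like this:

= Table.Schema(#"Unpivoted Other Columns")
// Check the TypeName/Kind columns in the result: the Value column should
// come back as a number type originally, but as "any" once the untyped
// Apr column is included in the unpivot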

  • Finally, we renamed the last two columns… which works nicely, but it doesn’t change the data type:

image

Okay, so who cares, right?  There is still a number in the “any” format, so what gives?

What you get here depends on where you load your data.  If you load it to the Excel worksheet, these will all be interpreted as values.  But Power Pivot is a totally different case.  Power Pivot defaults any column defined as “any” to a Text data type, resulting in the problems we’ve already seen.

Fixing the Issue

For as long as we've been teaching our Power Query Workshop, we've advocated defining data types as the last step in your query, and this is exactly the reason why.  In fact, you don't even need to define data types in the middle of this query at all; that's just Power Query trying to be helpful.  To fix this query, here's what I would recommend doing:

  • Delete the existing Changed Type step
  • Select the final step in the query (Renamed Columns)
  • Set the data type for each column to Text, except the Sales column, which should be Decimal Number (or Currency if you prefer); the resulting step is sketched below
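
With that done, the final step should look something like the following M code (a sketch; it assumes the default Renamed Columns step name):

= Table.TransformColumnTypes(#"Renamed Columns",
    {{"Country", type text}, {"Prov/State", type text},
     {"Month", type text}, {"Sales", type number}})

Because this step now runs after the unpivot, the Sales column gets typed correctly no matter how many month columns feed into it.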

image

When this is re-loaded to the Data Model, you’ll again be able to get the values showing on the Pivot Table as Sum of Sales.

Avoiding the Issue

Now, if you don’t want Power Query automatically choosing data types for you, there is a setting to toggle this.  The only problem is that it is controlled at a Workbook level, not at a global Excel level.  So if you don’t mind setting it for every new workbook, you can do so under the Power Query settings:

image

Is Changed Type Designed in the Correct Way?

It’s a tough call to figure out the best way to handle this.  Should the data types be automatically hard coded each time you add a new column?  If the UnPivot command had injected a Changed Type step automatically, we wouldn’t have seen this issue happen.  On the other hand, if a textual value did creep in there, we’d get an error, which would show up as a blank value when loaded to Power Pivot.  Maybe that’s fine in this case, but I can certainly see where that might not be desirable.

Personally, I'd prefer to get a prompt when leaving a query if my final step wasn't defining data types.  Something along the lines of “We noticed your final step doesn't declare data types.  Would you like me to do this for you now? (recommended)” or something similar.  I do see this as an alternative to the up-front data type declaration, but to be honest, I think it would be a more logical place for it.

October News and Events

It’s a busy month here at Excelguru. Instead of a technical post we wanted to catch everyone up on our October news and events!

October News and Events: Live Course – Master Your Excel Data

Ken is teaching a LIVE, hands on course in Victoria, BC on Friday, October 21 from 9:00am-4:30pm. This session is great for anyone who has to import and clean up data in Excel and will change the way you work with data forever! Ken will teach you how to use Excel Tables, Pivot Tables and Power Query. Space is limited to only 20 attendees, so don't miss out on your chance to sign up. For full details and to register for the session, visit: http://www.excelguru.ca/content.php?291-Live-Course-Master-Your-Excel-Data.

October News and Events: Power BI Meet-up

The next Vancouver Power BI User Group meet-up is happening on Thursday, October 13 from 5:30-7:00pm. Scott Stauffer, Microsoft Data Platform MVP, will be presenting on How to Operationalize Power BI. Together we’ll look at some solutions that might help pass your Power BI solution over to IT to manage enterprise-wide. Dinner and soft drinks will be provided. View the full details and sign up to attend at: http://www.meetup.com/Vancouver-Power-BI-User-Group/events/234126999/.

Microsoft MVP Award Received

For the 11th straight year, Ken has received the 2016 Most Valuable Professional Award from Microsoft!  For the previous 10 years, Ken's award was in the Excel category, but this year's award is in the Data Platform category.  The new category reflects the work he's been doing this past year with Power Query and Power BI.  Congratulations Ken, your guru status remains assured.

Our Team Has Grown

As we mentioned the other day, Rebekah Sax has recently joined the Excelguru team. She brings with her a wealth of experience in marketing, communications, event planning and administration. Please join us in welcoming Rebekah as she helps us make new connections and continue to grow.