Calculate Start and End Dates

I got an email from a reader this morning who asked how to calculate start and end dates for a given employee when they have had multiple terms of employment.  It's been a while since we've had a technical post on the blog, so I thought that this would be a good one to cover.

The Challenge

In this case (which you can download here) we are given the table shown below on the left, and we need to create the table shown on the right:

image

As you can see, John's start date needs to be listed as Jan 1, 2013 and his end date needs to be listed based on the last date he worked here: Oct 31, 2016.

How to Calculate Start and End Dates using Power Query

My first thought was "we'll need a custom function to do this", but as it turns out, there is a MUCH easier way to accomplish this, and it's 100% user interface driven.  I'm virtually certain that the performance will also be much better over larger data sets (although I haven't specifically tested this.)

Let's take a look:

Step 1: Connect to the data

This is pretty easy, just select the table and use Power Query to connect to the data:

  • Excel 2010/2013:  Power Query --> From Table
  • Excel 2016: Data --> From Table/Range

We'll be launched into Power Query and will be looking at our short little table:

image

Step 2: Calculate Start and End Dates via Grouping

The trick here is to actually use Power Query's Grouping feature to calculate the start and end dates.  To do this:

  • Go to Transform --> Group By

The dialog will open and is already offering to group by Name, which is what we need.  Now we just need to select the grouping levels.  The first is going to be our Start Date, so we'll rename it as such and change it to calculate a Min of the From column:

image

The effect here is that this will provide the lowest value from the "From" column for each employee.  A perfect start.

Next, we need to add a new grouping level to get the End Date.  To do that:

  • Click Add Aggregation
  • Configure the new column as follows:
    • New column name: End Date
    • Operation: Max
    • Column: To

It should look as follows:

image
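
For the curious, the dialog writes a single Table.Group step behind the scenes.  Here's a minimal sketch of the generated M code, assuming the prior step is named Source and your columns match the ones shown above:

    // Sketch only - step and column names assumed from the dialog above
    = Table.Group(
        Source,
        {"Name"},
        {
            {"Start Date", each List.Min([From]), type date},
            {"End Date", each List.Max([To]), type date}
        }
    )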

And believe it or not, you're done!

image

How this works

The key here is that the grouping dialog in Power Query works on all records in the group.  This is really important, as the first aggregation column has no bearing on subsequent columns… if it did, we'd get the max for the first record only, which is not at all what we're looking for.  Instead, the Group By will first restrict the rows to all records for John, then pull the Min and Max out of the remaining three rows, returning those as the values.

The other thing that is worth noting here is that the order of the source data is irrelevant.  We could have provided either of these options and the answers would still have been calculated correctly:

image

Final Notes

It's also worth mentioning that this technique to calculate start and end dates will work in both Excel and Power BI, as the feature set is identical between the two products.

Sometimes things that look hard are actually really easy when we have the right tools in our hands, and this happens to be one of those situations.

Using Power BI When You’re Not a Data Person

This week's blog post on using Power BI when you're not a data person is a guest post by Rebekah Sax of Excelguru Consulting Inc.

Last week I participated in Excelguru's first ever "Data to Dashboards" Power BI Boot Camp. Seventeen of us lucky attendees were introduced to the incredible power of Power BI. Just to be clear, I am NOT a data person. My background is in marketing and communications, which is the main focus of my role here at Excelguru. So I was a bit nervous about being the only participant who was not in finance/accounting, an Excel jockey or a data analyst of some sort. Would I be able to keep up with the class and learn this brand new platform? How do you go about using Power BI when you're not a data person who usually works with numbers?

Welcome to Power BI

The boot camp consisted of three jam-packed days of hands-on exercises led by Ken Puls. We started off nice and easy by creating some basic visuals we are all familiar with, such as map, bar and column charts. By the end of the first day, we had learned how to build reports and dashboards using multiple data sources, and how to publish and share them. Later, we began writing DAX measures for filtering and shaping our data. This is where I learned that I CAN CODE! However, there is also a great feature called Q&A. This is a "query interface" which allows you to pull insights from your data using regular English, no coding required. For example, you can type in "number of locations" or "total sales in May" and Power BI will build you a basic visual that shows the information, pulled from your source data table.

For the last part of the day, we had fun playing around with cool custom visuals from the Power BI Custom Visuals Library. These are a great way to tell a story with your data by making it more visually interesting and engaging. Some of the options are a little whimsical. I mean, I don't know why I would ever need to represent data as fish swimming in a tank, but I could if I wanted to.

Power BI Custom Visuals Library

The number of custom visuals available in the Power BI library is really impressive, and many of them are free.

Cleaning and Working with Data

Our second day began by looking at the ways Power BI gets, transforms and loads our data using Power Query. This amazing tool is also used in Excel, so the techniques we learned are transferable between Excel and Power BI. I'm glad I had been previously introduced to Power Query, but some things were still challenging. For example, it took a while to wrap my head around Power BI's six different types of joins for merging two tables together. Luckily, I can keep the copies of the example slides to use as a reference later on.

Inner Join

The "Inner" join type in Power BI. This join gathers only records which exist in both tables.

It was interesting to learn different techniques for breaking apart and recombining data from various sources into one cohesive data set. We also dove into formulas using M code (the Power Query language) and learned some intermediate DAX measures (yikes, more coding!).

The Loaded Pencil

What really blew me away was when we started applying these measures to sales data from a fictional restaurant called the Loaded Pencil. First, we set up a calendar table and used the CALCULATE function to start pulling out key metrics such as sales for a particular period. Another measure allowed us to compare to sales for the same period in the previous year, quarter or month. Using different visuals, we easily filtered the data to view specific sales details. Our report even allowed us to look at the broad categories and then drill down right into specific items. We then set up cool visuals displaying food and alcohol sales as compared to budgeted goals. Now this was something I could sink my teeth into from a marketing perspective. I began to see real-life applications for using Power BI when you're not a data person!

The Loaded Pencil Sales Report

Sample visuals looking at the sales data from our fictional restaurant, The Loaded Pencil.

Taking It Mobile, RLS and Developing Like a SQL Pro

The first part of day three was really fascinating: we focused on making our reports and dashboards mobile-friendly. Essentially, we built a mobile app displaying the outlets of a well-known coffee shop chain. It showed all the locations around BC and even let us target specific regions or cities, so you could find where to get your next caffeine fix. (Unfortunately, star points aren't tracked on this app, so I'll keep using the official one.) I could really see the possibilities of mobile-accessible reports for a team with members regularly out of the office on site visits, client meetings, sales calls, etc.

Making Data Mobile

An example of our coffee shop Power BI Desktop report and the mobile-friendly version. Note that on the mobile version, the data set has been filtered to show just the Lower Mainland locations.

While the topic of Row Level Security is not something I need to worry about in my role, it was interesting to see the different ways it can be created and applied to restrict the information shown to only what a reader is entitled to see.

Using Power BI When You're Not a Data Person

One of the things that sticks out the most for me was chatting with another participant at the beginning of day two. She was so excited that she was already able to start applying some of the stuff we learned the day before to her job. I have to say, my head was full to bursting by the end of the boot camp, but I did gain a few big insights:

  1. Using Power BI when you're not a data person to create a report can make it easier to present and understand the key information. This is especially true if you are like me and get overwhelmed by row upon row of numbers as on a traditional spreadsheet.
  2. Because you can move from the "big picture" down into specific detail with just a few clicks, you can make better connections and insights. You may even be able to pick up on something that has gone unnoticed but is having an impact.
  3. Being able to access precise, up-to-date information while on the go is huge. Imagine being able to answer a specific question right on site, without having to go back to the office and call or email your client later.

Just remember, you don't have to already be an Excel expert to be able to use Power BI. It's certainly complex and can seem daunting, but the instruction here was fantastic and - if there's stuff that wasn't covered - it's great to know that there is also a passionate community always willing to help someone trying to learn.

Solving Business Problems – Power Query Does That

This week's blog post on solving business problems with Power Query is a guest post by Alex Jankowski

Does your business or work group have manual tasks that are repeated regularly, such as data re-entry, copy/paste functions, sending the same old email, cross-referencing lists, etc.? We can automate these kinds of activities with a wide variety of tools, such as Excel (including VBA, Power Query and Power Pivot), Access, SharePoint and MS Power BI, as well as with other, more sophisticated software packages.

Solving Business Problems You Didn't Even Know Existed

I have found the following to be common themes that can inhibit identifying and solving business problems:

  1. Organizations are illiterate about their data.
  2. Organizations are allergic to cost.
  3. Many managers lack technical imagination. They can’t even conceive of the problems that the organization’s data could solve, let alone conceptualize potential solutions.
  4. Organizations think that only financial data is worthwhile.
  5. Organizations roll out massive ‘suites’ with no concept of how to get their money’s worth from the licenses. My guess is that on average less than 25% of the capabilities of MS Excel are utilized due to lack of training/imagination, which is not a good ROI.
  6. Organizations think that the IT group has a handle on this, whereas IT either thinks that the data belongs to the business, or prevents anyone from accessing important data (IT acting as a “department of productivity prevention”).

And I’m not even talking about the mission critical data that resides in big ERP systems (including HR, Corporate Finance, Time Records, and Project Financials) or “Big Data” that everyone is looking over their shoulders at.

What about:

  • Detailing a list of technical drawings on a project?
  • Tracking hours spent on a project activity, or the employee vacation schedule against entitlements?
  • Creating a dataset of employee skills and performance reviews?
  • Linking that scheduling software package to a list of real deliverables?

The Basic Win - Owning Your Data

Every business has what can be termed ‘islands of data’ which are often generated and managed manually.

Islands of Data

There are 'islands of data' that exist in every organization

Let’s say that your business splurges to automate and standardize some elements of the work process to be more effective. What are the benefits? Well, there will be a reduction in repetitive costs that offsets the application implementation costs. People can then graduate from data makers to data owners (where they spend more time analyzing report data rather than creating the report itself), and begin solving business problems by making smarter, data-driven decisions. But is that it? Was it worth it? Where’s the big payback?

The BIG payback is in linking the ‘islands of data’ with each other to gain unexpected insights. Don’t believe me? Let's take a look at a simple real-life example.

The Unexpected Win

A few years back, I was working in an engineering firm on a very large project. The Operations Manager (thankfully, a guy with great data imagination) wanted a way to track piping isometric drawings as they were reviewed by various project leads. The drawings would get lost in the review process, like when the head of electrical left them on his chair when he went away for a month in the summer. So we came up with a solution to identify every isometric in the review process, who had approved it, when it was approved, and where it was going next. We met the application goal and stopped losing the isometrics in the review process.

Sample Isometric Drawing

Sample Isometric Pipe Drawing (from http://www.svlele.com/drawings.htm)

Was that the BIG win? Nope. The big win was when we started matching the list of drawings that had completed the review cycle with the list of drawings issued for manufacturing in the document control system (which would have been a great opportunity for Power Query – I wish we had it then). Guess what? We found a small count of approved drawings which had not been issued to manufacturing.

The Real Cost of Not Doing the Data Work

You're thinking, "So what?" Well, each isometric drawing represents a section of pipe that is fabricated by the manufacturer with flanges, valves, fittings, etc. The fabricated pipe is then sent to the construction site holding yard to wait its turn to be assembled in the pipe rack by the construction team. The impact of the drawing not being issued to the manufacturer is that the pipe section is not on site when it’s time to construct, and nobody recognizes it is missing because the drawing was never issued.

What’s the cost of that?

  • extra cost: Rush to find and issue the missing isometric drawing
  • Extra cost: Construction team has to re-sequence its work to allow for the missing pipe section
  • Extra Cost: Rush fabrication by the pipe manufacturer, possibly resequencing their production schedule and possibly charging overtime rates
  • EXTRA Cost: Rush shipping charges to get the late pipe section to the construction site
  • EXTRA COST: Possible overall delay in the construction schedule
  • BIG EXTRA COST + LAWSUIT: Possible overall delay in the refinery startup schedule

That extra cost could be thousands of times more than the cost of doing the data work, and possibly end up eating all of the project’s margin and more. (If you’re going to be allergic to cost, this is the one to be allergic to). Creating the application for isometric tracking was a win, but matching it to the document control system drawing list was the BIG win and nobody expected it!

Delivering the BIG wins

Let's take a look at some ways at solving business problems in Power Query using the data you already have. What if you:

  • Need to find files, audit folder contents, or document a folder structure on the server? Power Query does that (see the sketch after this list).
  • Need to develop an email filing system for your project? Power Query can list the emails, their contents, and attachments.
  • Need to open server files from an Excel list without navigating through the File Explorer? Power Query can help with that. (Eliminating the practice of people navigating from "My Computer" into the bowels of the server structure for each individual file is one of my personal lifetime goals.)
  • Need to create a templated report about projects, employees, or hours from an ERP system extract? Power Query does that.
  • Need to pull data from SQL for templated operations reports? Power Query does that.
  • Need to read multiple CSV or Excel files into a single data set? Power Query does that (check out some of Ken's other blog posts, like this one on Combining Excel Files).
  • Need to report on the status of drawings in your document control system? Power Query does that.
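
As a taste of the first item above, here's a minimal sketch of a folder audit query.  The path and the columns kept are placeholders for illustration only:

    let
        // List every file in the folder (and its subfolders) - path is a placeholder
        Source = Folder.Files("C:\Projects\Drawings"),
        // Keep just the columns useful for an audit
        KeptColumns = Table.SelectColumns(Source, {"Name", "Extension", "Folder Path", "Date modified"})
    in
        KeptColumns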

Once the 'islands of data' are being processed, Power Query does an awesome job of combining the data sets together, even when the column names are different and the data formats need to be aligned.

What data problems do you need to solve? How are you currently solving business problems, and are there opportunities for BIG wins in your business? Give us some feedback in the comments!


Power BI Boot Camp

I'm pretty excited to announce that we have our very first Power BI Boot Camp coming up in Vancouver, BC on Feb 22-24.  This is something that I've wanted to do for a while, and it's going to be awesome!

image

The boot camp is going to be 3 full days of hands-on Power BI, and is intended to get you up and running, building cool solutions that will impact your business.

Why do you care about the Power BI Boot Camp?

Okay, I get it… you're an Excel person, and while you've heard of Power BI, you're not quite sure if you need it or not, and certainly don't know if you can justify the cost of a 3 day Power BI Boot Camp...

I honestly believe that the answer to both questions is an emphatic yes!

Top 5 reasons you need to learn about Power BI:

We'll cover the "how do I convince my boss" near the end of the post, but first, let's talk about what Power BI is, and why it is relevant to you.

  1. A major reason that companies are behind in their Business Intelligence development is that they are using outdated versions of Excel and cannot (or are afraid to) upgrade.   Power BI Desktop is a separate program that sits completely outside Excel, meaning that you can run the most up to date business intelligence tool regardless of the version of Excel you're stuck on.  I'm looking at you, Excel 2003 & Excel 2007 users.
  2. You can do a TON of stuff with Power BI for free.  Yes, there are services that cost a nominal fee, but you'll be surprised at how much you can do for absolutely nothing.  The investment you make here will pay back in spades as you begin to automate tasks that were taking you huge amounts of time, and get the ability to deliver true business intelligence to your stakeholders.
  3. Power BI's data collection experience is based on Power Query, and its modelling experience is based on Power Pivot.  So if you have those skills from your Excel journey, they will be totally relevant in Power BI.  And if you don't?  The new skills you learn at this course related to these areas are portable back to Excel as well.  (So again, if you're stuck on an older version of Excel, this is your chance to upskill BEFORE your Excel is updated.)
  4. If you've ever tried to share Excel dashboards, you know how hard it can be to do so and maintain them.  And if your end users want their reports accessible and interactive on mobile?  Forget it.  But Power BI solves these issues, including delivering interactive reports to your mobile phone (yes, even your non-Windows phones!)  Rest assured, however, that if your company is cloud averse, this tool is not dead to you, and it's still worth coming.
  5. You'll be learning in a small class environment from a world class expert who cares about delivering value.  I've been there.  I know how hard your job is.  And I want to help you become more effective.  That is what my company is all about, and I'm very serious about it.

What will the Power BI Boot camp cover?

To be completely honest, the better question is probably "what won't we look at?"  I've got all kinds of stuff cooking for this.  When I first started drafting the Power BI Boot Camp outline I was wondering how I'd ever fill 3 full days.  Now I'm trying to figure out what to cut so that I can fit it all in.  Keep in mind that it's not going to necessarily be in the order below, as some topics are MUCH bigger than the summary I'm providing, but here's a bit of an overview…

Where it all starts…

We'll start by building a basic Power BI solution on an Excel file.  With some minor data modifications, some cool visuals, and publishing to the cloud service, we'll walk through the most basic of Power BI journeys.  I want to do this so that you can see just how easy it is to build a cool solution that works and interacts, like this one.  (Go ahead... click any of the bars and see how it cross filters in your web browser!)

More data, more transformations, and some modelling…

From there we'll start to dive deeper, teaching the techniques that allow you to build more robust models and solutions.  We'll spend a bunch of time looking at different data sources, as well as some of the more complex methods for fixing bad data to make sure that you can use it in your solutions.  We'll also show you how to link these tables together so that they filter nicely across multiple visuals.

image

Let's get visual…

Power BI is all about presenting visuals of your data.  It has a big collection of built-in visuals, as well as an ever growing gallery of custom visuals.

While I won't promise that we'll look at each different visual in the product (we only have 3 days), we are going to look at a lot of them, including some of my favourite custom visuals.  I want to show you how to create them, format them, and what data you need to get the best out of them.

image

We're building dashboards to tell a story here, so I want to try and get that as close to real life as possible for you.  From bar, column and line charts, to maps and bullet charts, I really want to get you comfortable with these, as well as how to control the interactions between them.

And speaking of Dashboards… do you know the difference between a report and a dashboard in Power BI?  That will get clearly explained here as well.

Getting the numbers right…

We'll get into working with the DAX formula language, and understand how to add and override filter context to make your formulas show exactly what you want when you filter any visual.

How about Calendar intelligence?  I have a huge passion for this area too.  The Power BI Boot Camp won't just teach you how to build your own calendar table, you'll also leave armed with a collection of DAX calendar intelligence measures.  Those measure patterns are super important, as they'll allow you to report results correctly when you ask for Month to Date data or Month to Date for 2 months ago.

image

Sharing is caring!

We'll look at sharing via the web first, both for internal and external users on an invite-only basis.  But even more, I'm also going to walk you through embedding your reports in a web page so that you can show the world how awesome your company is.

In addition, you'll want to bring your phone with the Power BI app installed.  Why?  Because I'm going to make sure you get set up with your reports and dashboards delivered to your phone in a mobile optimized fashion where you can even annotate your reports by hand.

image

And if you'd prefer not to be cloud based?  We'll look at creating Power BI templates that you can distribute within your organization as well.

You know… while we're talking about sharing… have you ever sent a report to a manager only to have them come back with one of those "can you show me this?" type questions?  Let's try and solve that little problem by allowing our manager to ask their own questions using natural language queries.  In other words, how cool would it be if your manager could just type "Bar chart total stays by city" and it did it, even if you've never built that report?  Because they can.

image

And what about updates and security?

Wait… how do I keep this stuff up to date?  And how do I ensure that only the right people are looking at their data?  Well of course we'll cover those topics!

Not only will we look at scheduling refresh for cloud hosted data sources, but also your on-premises data sets via gateways and 3rd party applications.  Ooohh… connecting data to the cloud… can I hear anyone saying "Security!"  Well yes, so let's talk about both encryption and also how to restrict the data sets to show only relevant rows to the correct people.  (And let's face it, you will want that last part even if you keep your data local!)

So why Ken's Power BI Boot Camp then?

My goal is to deliver real value to you.  Anyone who has been to any of my courses in the past knows this:

  • It's virtually impossible to find someone who has more passion for what they teach than me
  • We limit the class size so that it doesn't get too big.  A huge part of the reason for this is because…
  • My style is to roam the room and help people keep up with the content so that everyone learns
  • I provide completed examples for all the work we go through, with catch up points along the way, just in case the above isn't achievable
  • I provide a ton of supplemental resources as well

I've been to classes before, and I know that it can be somewhat difficult to apply the course materials to your own work.  For that reason the Power BI Boot Camp is designed like all my courses: We will build solutions like you'd expect to do so in the office, exposing the pitfalls where the software doesn't work as you'd logically expect.  This is intentional, as we teach you not only how to do things correctly, but also how to debug problems as they happen.

What skill set do you need?

Do you need experience working with Power BI in order to come to the Power BI Boot Camp?  Heck no, that's what this course is about!  And while some Excel knowledge is helpful, you're certainly not going to see a test on Excel functions here.

Ultimately what you need is a hunger to learn; a desire to change your reporting world.  If you can bring me that enthusiasm, I'll teach you what you need to know in order to make that happen.

Making the Case to Your Boss

I managed an accounting department in my former life, so I totally get this.  We're all about data here, so here's the up front details:

The cost of the course is $1,495 plus GST.  It includes your breakfast and lunch all 3 days, but you need to look after your dinner and hotel room (if necessary).  So now you can work out the total cost.

I'll let you do the math and work out how many hours per week you need to save yourself in order to break even.  If you spend a significant amount of time cleaning up data to be used in Excel, you'll find that you'll break even on this alone.

Honestly though, the true value proposition of this course isn't the ability to break even on a labour basis, it's about the opportunity costs.  Opportunities like:

  • Identifying programs that are costing your company money which don't earn returns.
  • Finding the best paying leads to pursue for greater sales.
  • Delivering value to your stakeholders when and where they need it.
  • Reducing inventory during slow seasons to reduce carrying costs.
  • Highlighting key metrics that are under performing.
  • Building solutions to blow the minds of your clients or stakeholders.

Every business is different, but ultimately it's the opportunity to get better information into the hands of those who make the decisions, be it you or your boss.  That benefit FAR outweighs the price we charge, and will break even for you VERY quickly.

I hope to see you at the course.  It's going to be awesome, and it will change your data life forever. 🙂

Register for the Boot Camp here.

Make the Sample Binary file path dynamic

In this post we will explore how to make the Sample Binary file path dynamic when combining files using the new Combine Binaries experience in Excel and Power BI Desktop.

Sample Files

If you'd like some sample data files to play with, you can download them here.

Why do we even need to talk about this?

I've covered the new Combine Binaries experience in my last couple of posts.

But one thing I didn't dig deep into was the Sample Binary file path, and the fact that it actually gets a hard coded file path.  To replicate the issue, here's how I set up my quick test:

  • Open the application of choice (I'm going to use Power BI Desktop here)
  • Get Data (create a new query) From File --> From Folder
  • Browse to the folder path and click Edit

In this case I've browsed to the following file path, which only contains a single file (so far):  C:\Users\KenPuls\Desktop\CSVs.  And here's what it looks like in Power BI Desktop or Excel:

image

And now I click the Combine Binaries button at the top right of the Content column, resulting in this:

image

Now, as I discussed in the first post in this series, we know we can modify the "Transform Sample Binary From…" step to see those changes in the final output.  So what's the issue here?  I'm going to right click the Sample Binary and choose to view it in the Advanced Editor:

image

Note:  I did add a line break between the 3rd and 4th lines, so read that as one.

The key part to notice here is that the file path, despite being in the original CSVs query, is also hard coded into this query TWICE.  That makes it very difficult to port this from one location to another, as simply changing the file path in the CSVs query is not sufficient, it will still break upon refresh.  It's for this reason that we need to make the Sample Binary file path dynamic: so that we only have to change it in one place.
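
To make the problem concrete, the Sample Binary's M code ends up looking something like this.  (A sketch only - the step names here are assumptions based on my test, not copied from the screenshot.)  Notice that the folder path shows up twice more, over and above the CSVs query:

    let
        Source = Folder.Files("C:\Users\KenPuls\Desktop\CSVs"),
        // The row is looked up by a hard coded folder path AND file name
        #"Jan 2008.csv" = Source{[#"Folder Path" = "C:\Users\KenPuls\Desktop\CSVs\", Name = "Jan 2008.csv"]}[Content]
    in
        #"Jan 2008.csv"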

How to make the Sample Binary file path dynamic

To start with, I'm going to throw this solution away and start over completely.  And again, while I'm using Power BI Desktop to illustrate the method to make the sample binary file path dynamic, this will work the same in Excel with only one exception.  (Once you have the new combine binaries method in Excel, anyway.)

Step 1 - Launch the Power Query editor

To get started, I'm going to launch myself into the Power Query editor, ideally without creating a new query.  This is easy to do in Power BI Desktop, simply go to the Home Tab and click the top of the Edit Queries button.  You'll be launched into the editor without creating any new queries:

image

In Excel, if you've never opened the Power Query editor before, there is no way to get in there without creating a new query.  You'll need to create a New Query --> From Other Sources --> Blank Query.  Then you can expand the Queries pane on the left, right click Query1 and Delete it.  Silly, but that's pretty much the way to accomplish this.  (If you have created other queries, you just need to edit any one to get into the editor, as it won't create a new one for you.)

Step 2 - Create a Parameter for your file path

Before we get started, we need to create a single place to update our file path.  For this we'll use one of the Power BI/Excel parameters.  To do that:

  • Go to Home --> Manage Parameters --> New Parameter
  • Set up the Parameter as follows:
    • Name:  FolderPath
    • Required:  yes
    • Type:  Text
    • Current Value:  <your file path>  (mine is C:\Users\KenPuls\Desktop\CSVs)
  • Click OK

This will result in a rather simple little parameter that looks like this:

image
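
For the curious, a parameter is really just another query.  Under the hood it compiles to a single M expression along these lines:

    "C:\Users\KenPuls\Desktop\CSVs" meta [IsParameterQuery = true, Type = "Text", IsParameterQueryRequired = true]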

Step 3 - Create a new query against the folder

Now that we have our parameter, we are ready to actually create the query we need against the folder.  So let's do that now.  (Oh, and if you're working with Excel, just stay in the Power Query editor - no need to go back to Excel first.)

  • Power BI Desktop:  Home --> New Source --> More --> Folder
  • Excel:  Home --> New Source (near the end of the ribbon) --> File --> Folder

When prompted for the folder path, instead of clicking Browse, click the ABC on the left and choose Parameter:

image

It will automatically populate with the FolderPath parameter and, upon clicking OK, will take you to the preview window where you can click OK (Power BI Desktop) or Edit (Excel).
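
The net effect on the M code is simply that the new query's Source step references the parameter rather than a literal path; roughly:

    Source = Folder.Files(FolderPath)  // FolderPath is the parameter from Step 2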

Step 4 - Combine the Binary Files

Now we'll combine the binary files… all one of them.

  • Rename the Query to "Transactions"
  • Click the Combine Binaries icon on the top right of the Content column

Just a quick side note here… in the current build of Excel we don't see this, but in Power BI Desktop, we are taken to this window where we can control how the data types are determined:

image

This is pretty cool, and I assume it will be coming to Excel in the future too.  If your data types are consistent most of the time, you generally won't have to worry about this.  If, on the other hand, you've got strange things that happen (say, once every 10,000 transactions you get a fractional sales unit), you may want to choose the "Entire Dataset" option to avoid truncated decimals.

For now, just click OK with the default to blow past this dialog.

Step 5 - Make the Sample Binary file path dynamic

And finally, here we are, it's time to make the magic happen and actually make the sample binary file path dynamic.  To do this we're going to make a couple of small edits to the Sample Binary's M code.

  • Right Click the Sample Binary --> Advanced Editor

NOTE:  Be sure not to accidentally hit the Sample Binary Parameter1… we don't want that one!

Now, the first thing to notice is that the very first line no longer points to a hard coded file path; instead it points to our parameter.  That's very cool, as a single update to the parameter will change both this query and the original one that pulls the files from the folder.  One place to fix them both.

Now, there are still issues here, but I just want to do a bit of cosmetic cleanup first.  The second and last lines still start with the name of the file.  This is just something inside the M code that you'll probably never read again, but it's still good practice to clean it up:

image

I've changed both highlighted parts to read "SampleFile" to make the code a bit shorter.  And now I can focus on the real issue:

image

The highlighted portion above still holds both the hardcoded file path and file name.  What this means is that even though the folder path is dynamic, if I change the parameter and update the file path on a new computer, it will still be pointing to the older source.  That is far from ideal.

Interestingly, it's really simple to fix when you know how.  You simply grab everything from the [ to the ] and replace it with 0 so that it looks like this:

image
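
In text form, the edit amounts to something like this (the SampleFile names come from the cosmetic cleanup above; the rest is assumed from my test file):

    // Before: the row is selected by a hard coded folder path and file name
    SampleFile = Source{[#"Folder Path" = "C:\Users\KenPuls\Desktop\CSVs\", Name = "Jan 2008.csv"]}[Content]

    // After: always take the first file in the folder, wherever that folder lives
    SampleFile = Source{0}[Content]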

To show that it updates properly, I'm going to click Done, and throw a new file in the folder called "Feb 2008.csv".  Since this comes earlier in the alphabet, we'd expect it to show up before "Jan 2008.csv", and it does when I refresh the preview window:

image

The End Result

The biggest reason I want to make the sample binary file path dynamic is the scenario where I email the solution to someone else, and they have a different file path to the data files.  In this case they now only need to edit the project, update the FolderPath parameter, and everything will work again.

Why Not Edit in the Formula Bar?

In truth, you don't actually have to go into the Advanced Editor to update your code.  I made a cosmetic fix in there, as I actually do go back and read code later.  Since the default step name leaves a red herring in the code, I wanted to nail that down.  If you're never going to read the code though, it's cosmetic.

In effect, all that is really necessary is to replace the code from [ to ] with 0 as we did above:

image

The problem is that if you do this here, it automatically kicks off 3 new steps that have to be deleted:

image

Granted it's not the end of the world, but since I want to clean up the code anyway…

Final Thoughts

You're not alone if you think this should be unnecessary.  In my opinion, this dynamic nature should be standard, and I think it would be an easy fix for the team to implement.  Marcel even posted a suggestion to modify this feature here, which you should consider voting for.

Combine Excel Files

If you've wanted to use Power Query to combine Excel files with a single click - like you could for TXT and CSV files - the feature is finally here*.  The Combine Binaries feature has been re-written (as I discussed yesterday), and it now allows for easy combination of Excel files.

* This new feature is in the O365 Fast Insider preview today and in Power BI Desktop's November update.  I'm expecting to see this show up in the January Power Query update for Excel 2010/2013.

Just a quick caveat, I'm going to cover the items specific to the Combine Excel Files experience here, and not dive deep into the methods of how to modify and adjust the created queries.  For that reason, if you haven't read yesterday's post, I highly recommend doing that first.

The Classic Combine Excel Files Experience

For anyone who has tried to combine Excel files in the past, you'll be familiar with this.  You create a new query to pull From File --> From Folder and navigate to the folder full of Excel files.  Then you hopefully click the Combine Binaries button:

image

And you get this:

image

Obviously that's not so helpful.  The way to deal with this was to go to Add Column --> Add Custom Column and use the formula =Excel.Workbook([Content]) to convert the data into tables. Then some filtering and more user interface driven work was required in order to get your data.  The experience was a bit painful, and certainly not for beginner users.
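
For those who never lived through it, the classic pattern ended up looking roughly like this.  (A sketch with an assumed folder path and step names; the filtering needed afterwards varied by situation.)

    let
        Source = Folder.Files("C:\SalesFiles"),  // placeholder path
        // Convert each binary into a table describing the workbook's objects
        AddWorkbooks = Table.AddColumn(Source, "Workbook", each Excel.Workbook([Content])),
        Expanded = Table.ExpandTableColumn(
            AddWorkbooks, "Workbook",
            {"Name", "Kind", "Data"}, {"Object.Name", "Object.Kind", "Object.Data"}
        ),
        // Filter down to the objects you actually want before expanding the data
        SheetsOnly = Table.SelectRows(Expanded, each [Object.Kind] = "Sheet")
    in
        SheetsOnly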

The New Combine Excel Files Experience

To start with, I'm going to take two Excel files with sales transactions in them and combine them easily using the new combine Excel Files experience.  Here's the file characteristics:

  • Each file has a small set of data (2-3 rows)
  • The data is stored on Sheet 1 in a table
  • Both files are stored in the same folder

To get the data, I'm going to do the following:

  • Get Data using a new query "From File --> From Folder"
  • Browse to and select the folder
  • Click Edit at the preview window

As you can see, we've got a couple of Excel files here:

image

So I'll click the Combine Binaries button (at the top right of the Content column.)

And where this would have triggered an error in the past, it now kicks out a preview:

image

And what happens next depends on what you select (which is why they are numbered above.)

Combine Excel Files - Method 1

For the first kick at this, I'll select the Sample Binary Parameter 1 folder icon (indicated by the number 1 in the Combine Binaries preview).

Nothing will ever show in the preview window, but upon selecting the folder I can click OK, which will result in this:

image

As I showed in yesterday's post on the new Combine Binaries Experience, we now get a bunch of new queries and a few steps along the way.  The end result of this query, however, is a listing of all the worksheets, tables and ranges in each workbook.  This is the view that will allow you to go back and granularly pick out what you need and transform it.  In other words, this is the detailed, hard core view - the equivalent of writing the custom columns that we used to have to write ourselves.

Because this is so similar to the classic method, I'm not going to do much more with this.  The real point was to expose that selecting the folder in the preview window will bring you to this setup.

Combine Excel Files - Method 2

Method 2 revolves around selecting the table in the preview window; in this case the Sales table.  When we select that branch in the preview window we WILL get a preview of the data:

image

And when we click OK, we actually get the data combined nicely:

image

As discussed in the previous post, if we wanted to modify and/or change:

  • The Source columns (Source.Name or others): We modify the Removed Other Columns1 step in this query.
  • The data before it is imported and combined: We modify the Transform Sample on the left side.

Now, this is MUCH easier than what we used to have to do!

Combine Excel Files - Method 3

But what if your data is not in an official Excel Table?  What if it's just data in a worksheet range?  Well, then you select the worksheet icon instead:

image

And the results are virtually identical to the previous method:

image

Why does that work?  It works because the Transform Sample is smart enough to automatically promote the first row to headers, so it doesn't actually need the table.  On the other hand, if that data wasn't in the first row, you may need to go back to the Transform Sample and tweak it to do what you need (like remove top rows, promote rows to headers, delete columns, filter, etc.)
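
For example, if each file carried two junk rows above the real header, a tweak in the Transform Sample along these lines would sort it out (the row count and step names here are assumptions):

    // Skip the junk rows, then use the next row as the headers
    #"Promoted Headers" = Table.PromoteHeaders(Table.Skip(#"Previous Step", 2))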

Caveats When Trying to Combine Excel Files

This experience will work well for many things, but as always there are some caveats.

Single Object Only

The default experience here is to combine based on the object you select.  In other words, if you select Sheet 1, it will combine Sheet 1 from each file.  It won't combine all sheets in the file based on the Sheet 1 code.  If you want to do that, you need to go back to Method 1 above, filter to the objects you want, and deal with them using classic import methods.  (Unless you try to get real techy and build the function then repurpose it to use in that table - something I have not done yet.)

Preview Based On First Item In the List

The preview and import templates are based on the first file in the list.  You can see how that affects things when I throw a new file into my structure that has different worksheet and table names:

image

While the two Sales workbooks have Sheet1 in them, this one doesn't, making it impossible to use this function to combine the first worksheet in each file. (The template would be based on Dec and would return errors for the other two files.)

If the order is important, you'll need to sort the file list first to get the correct file to the top before you trigger the Combine Binaries function.

For the record, I have sent an email to the Power Query team suggesting that it would be nice to get the option to pick the file here which the template should be based upon.  That would make this much easier to work through, I think.

Inconsistent Columns Are Bad News

Let's say that you have two files with tables in them that have different column names (or different numbers of columns).   The transformations generated will actually deal with this correctly, resulting in a column of tables which have different headers.  All good so far, but when the main query gets to the last step, it expands the column of tables based on the headers for the table in the first row only.  This is actually a standard thing, so not a surprise, I just want to make sure you don't think this is a holy grail that will solve the differing column issue.  (We have more work to do in that case.)
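
If you do hit that case, one possible approach is to expand using the union of every nested table's column names rather than just the first one's.  This is a sketch only - the Prior step name and the "Transform Binary" column name are assumptions you'd adjust to match your query:

    // Build the full list of column names across ALL the nested tables
    AllColumns = List.Union(List.Transform(Prior[Transform Binary], Table.ColumnNames)),
    // Expand with that complete list so no columns get dropped
    Expanded = Table.ExpandTableColumn(Prior, "Transform Binary", AllColumns)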

Overall Thoughts

At the end of the day, I have to say that this is a pretty welcome addition.  I'm still not a fan of the names of the generated queries, and I would add something to change the template file, but I think this is going to make it a LOT easier for people to import and transform Excel files than it has been in the past.

New Combine Binaries Experience

One of the updates in the latest Excel 2016 Insider preview build is a new Combine Binaries experience for Power Query.  This is both good and bad, and in this post we'll take a detailed look at both sides before it hits you in your Excel implementation.  (With this new feature in the O365 Fast Insider preview and with it already existing in Power BI Desktop's November update I'd expect to see this show up in the January Power Query update for Excel 2010/2013.)

A Little History Lesson

Before we start, let's get everyone on the same page.  If you didn't know this, the Combine Binaries feature is one of the coolest things in Power Query or Power BI Desktop, as it lets you do a one-click consolidation of "flat" files (txt, csv and the like).

How Combine Binaries (Used to) Work

In order to make this work, you'd set up a new query to get data From File --> From Folder, browse to the folder that contained your files, select it, and click Edit at the preview window.  At that point all the magic happens when you click the Combine Binaries button at the top of the Content column:

image

That "one-click" experience would take your source folder and add four steps to your query:

image
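
From memory, those four steps amounted to something like this (a sketch; the exact options and step names varied):

    let
        Source = Folder.Files("C:\CSVs"),                    // placeholder path
        // Glue every file's contents into one big binary
        CombinedBinaries = Binary.Combine(Source[Content]),
        ImportedCSV = Csv.Document(CombinedBinaries, [Delimiter = ","]),
        // Note: the header row from EACH file is still buried in the data
        PromotedHeaders = Table.PromoteHeaders(ImportedCSV)
    in
        PromotedHeaders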

And it puts it all together nicely.  So cool, so slick, so easy.

Why did we need a new Combine Binaries experience?

So the first real question here is "Why even modify this experience at all?"  As it happens, there were a few issues in the original experience:

  1. You lost the original file name details.  If you wanted to keep that, you needed to roll your own custom function.
  2. Headers from each file were not removed, so you'd have to filter those out manually.
  3. It could only work with flat files (csv, txt, etc.), but not more complex files like Excel files.

So for these reasons, the team decided to build a more robust system that could deal with more file types and more data.

The New Combine Binaries Experience

So let's look at what happens in the new experience.  We start the same way as we always did:

  • Set up a new query to get data From File --> From Folder
  • Browse to the folder that contained your files, select it and click OK
  • Click Edit at the preview window
  • Click the Combine Binaries button at the top of the Content column

At this point a whole bunch of crazy stuff now happens, and your query chain looks totally different than in the past:

image

There are 3 main areas that are different here:

  1. A whole bunch of new queries and parameters with very similar names,
  2. The original source name is retained
  3. Different query steps than in the past

If your first reaction to this is to feel overwhelmed, you're not alone.  I'll admit that my first reaction to this was not a happy one.  There is a huge amount of stuff injected into the file, it's difficult to follow the relationships in the M code (even if you do know how to read it all), and it isn't intuitive as to what to do with it.

At the end of the day, the biggest change here is the order in which things happen.  In the original implementation of the Combine Binaries feature set, it combined the files first, then applied some other Power Query steps.

The new method actually examines the individual files, formats them via a custom function, then appends them.  This is very different, and it actually gives us a bit more flexibility with the files.

What's the end effect that will be different for you?  Simply this:

  • More queries in the chain (maybe you don't care), and
  • The file name is preserved by default (which was not there in the past)

image

Now if you are good with everything here, then no worries.  Continue on, and you're good to go.  But what if you want to make changes?

Making Changes in the new Combine Binaries Experience

The biggest challenge I have with this new implementation is this: if you are a user hitting it for the first time, how do you know what you can (and should) change?

Changing or Preserving File Details

What if I wanted to keep more than just the file name… maybe I wanted to keep the file path.  You'll be surprised, but this is actually pretty easy now.  Since Power Query wrote a custom function for us to import the data, all the file properties were already preserved for us.  But it also made a choice to keep only the source file name.

To modify this, we just look at the steps in the Applied Steps window, click the gear next to "Removed Other Columns", and choose the ones we do want to keep:

image

So in this case, I can actually uncheck the box next to Source.Name and remove that from the output.  (We do want to keep the very last column, as that is what actually holds the combined data from the files.)

Also, after looking at the Applied Steps window, I realized the Renamed Column1 step was only in place to avoid a potential name conflict if you had a column called Name (there is a really good reason for this, which I'll look at in another post.)  In this case it is totally unnecessary, so we can just delete it.

So now our code looks as shown below, and the output looks similar to what we would see in the old experience:

image

Notice I said that it looks similar - not that it is identical.  This is actually better as there are no headers repeating in the data at all, so we don't need to get rid of those.  That is an improvement.

Modifying the Import Function

Now, that's all good for the details about the file, but what about the data?  What if I only want records for department 120?

To understand the impact of this next piece, we need to understand that there are two ways to combine data:

  1. Bring ALL the data into Excel, then filter out the records you don't want
  2. Filter out the records you don't want, and ONLY bring in the ones you do

Guess which will consume less resources overall?  Method 2.  But the challenge here is that Power Query encourages you to use Method 1.  You're presented with a full table that is just begging you to filter it at this point… but you are better off dealing with it using Method 2… only it's not obvious how to do that.

The kicker here is that the logical flow of Power Query's Applied Steps window has been interrupted with the "Import Custom Function1" step.  And if you've ever used Power Query before, you know that modifying a Custom Function is far from a beginner friendly thing to do.

As it happens though, the Power Query team has given us a way to easily modify the custom function, it's just… well… let's just say that the name isn't as intuitive as it could be:

image

So this is the secret… if you select the "Transform Sample Binary from Combine Binaries" (what a mouthful!) it takes you to the template that the function is actually using.  To put this another way… any changes you make here will be used by the custom function when it imports the data.

So here's the changes I made in the Transform Sample:

  • Removed the Change Type step
  • Filtered Dept to 120
  • Renamed TranDate to Date
  • Renamed Sum of Amount to Amount
  • Change Type with Locale on the Date column to force it to US dates
  • Removed Errors from the Date column
  • Change Type on the other columns

In other words, I do the majority of the cleanup work here based on the single file.  The end result for the Transform Sample query looks like this:

image
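
Put together, the Transform Sample's M would read roughly like this.  (A sketch; the delimiter, step names and exact types are assumptions based on the changes listed above.)

    let
        Source = Csv.Document(#"Sample Binary Parameter1", [Delimiter = ","]),
        PromotedHeaders = Table.PromoteHeaders(Source),
        FilteredDept = Table.SelectRows(PromotedHeaders, each [Dept] = "120"),
        RenamedColumns = Table.RenameColumns(FilteredDept, {{"TranDate", "Date"}, {"Sum of Amount", "Amount"}}),
        USDates = Table.TransformColumnTypes(RenamedColumns, {{"Date", type date}}, "en-US"),
        RemovedErrors = Table.RemoveRowsWithErrors(USDates, {"Date"}),
        ChangedTypes = Table.TransformColumnTypes(RemovedErrors, {{"Dept", Int64.Type}, {"Amount", type number}})
    in
        ChangedTypes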

Hopefully this makes sense.  It's a fairly straightforward transformation of a CSV file, but rather than doing the work on the files that have already been combined, I'm doing it in the Transform Sample query.  Why?  Because it pre-processes my data before it gets combined.  And the cool thing?  Now we go back to the main query I started with:

image

The results have already propagated, and we're good to go.

Thoughts and Strategies

Overall, I think the new Combine Binaries experience is a good thing.  Once we know how to modify it properly, it allows us some great flexibility that we didn't have before - at least not without writing our own custom functions.  There are a couple of things we do need to think about now though.

Name Your Queries Early

Under this new method, I'd highly recommend that you change the name of your query BEFORE you click that Combine Binaries button.  All those samples on the left side inherit their name from the name of the query, and Power Query is not smart enough to rename them when you rename your query.  Here's a comparison of two differently named queries:

image

So while I used to think about renaming my query near the end of the process, we really need to think about that up front (or go back and rename all the newly created queries later.)

Where Should You Transform Your Data?

The next big thought is where you should run your transforms… should you do it on the master query (the one with the consolidated results), or the sample used for the transformation function?

The answer to that kind of depends.  A few instances where this could really matter:

  1. Filtering: I would always filter in the sample.  Why spend the processing time to bring in ALL of the data, only to filter it at the end?  Much better to modify the Transform Sample to avoid bringing in the records in the first place.
  2. Sorting: I would suggest that most times you'd want to run your sorts in the master query.  Let's say you have a file for each month, and you sort by date.  If you do that in the master query, things will be sorted sequentially.  If you do it in the Transform Sample, it will be sorted sequentially within each file, but if the files are out of order, your master will still be out of order until you re-sort by date.  (Apr comes before Jan in the alphabet.)
  3. Grouping:  This one is tricky… most of the time you'll probably want to load all of your transactions into the master query, then group the data.  So if I want to group all transactions by month and account code, I'd do it in the master query.  But there are definitely instances where you'll want to pre-process your data, grouping the data inside the individual file before it lands in your master query.  A case in point might be the daily payments list for payments you issue to vendors.  If they are batched up then sent to the bank, you'd want to import the file with all daily records, group them, then land that pre-grouped data into your master query.  That is an operation that you may wish to do at the Transform Sample level.

The good news is that it is easy to experiment here and switch things up as you need to.

And One More Major Impact…

This blog post has really focussed on how the new Combine Binaries experience impacts importing text files.  What it doesn't cover, however, is why it was REALLY built.  I'm going to cover this in tomorrow's blog post: how the new Combine Binaries experience allows one click consolidation of Excel files!

Power Query Dependencies Viewer

The November 2016 update is now out, and it finally brings the Power Query Dependencies viewer to Excel.  While it’s been in Power BI Desktop for a while (as Matt posted about a while ago), this is huge news for Excel, as this feature has been badly needed.

Viewing Power Query Dependencies

To get to the Power Query Dependencies view, you simply need to perform the following steps:

  • Edit any query (just to get into the Power Query editor)
  • Go to the View tab
  • Click the Query Dependencies button

image

Once you do so, you’ll be launched into the Power Query Dependencies window as shown below:

image

At first glance…

So at first glance, this is pretty exciting and yet – if you work with complicated Power Query setups like I do – you’ll find the Query dependencies view a bit… lacking in some areas too.

First off, if your query is complicated, it really does open that small.  Wow.  Now there is a scaling button down the bottom, but that quickly scales so that stuff is off-screen.  No problem, right?  We’ll just drag some stuff around then… oh… except we can’t.  Dragging any box around drags the entire model, not just that one box.

What can you do with the Query Dependencies viewer?

Rather than focus on the stuff we can’t do, I want to take a look at what we can (although I won’t be able to help making a couple of suggestions here as well.)

Maximizing the model layout

The first thing to point out is that despite the fact that it isn’t obvious, the window is resizable.  If you mouse over any border or corner you’ll get the arrows that indicate you can left click and drag to make the window bigger.

So normally the first thing I do is:

  • Move the window to the upper left of the screen
  • Drag the bottom right corner to make the model fill the entire screen
  • Click the little box icon in the bottom right corner by the scroll bar to “Fit to Screen”

After all, the reason I’m using this view is because the models are big!

Some things that would be really useful here:

  • It would be awesome if there was a Maximize button near the X in the top right (like the Power Query window and every other app has.)
  • It would also be good if we could double click the title bar and have it maximize the window (again, like so many apps out there.)

Either (or both) of those features would save me a lot of time.

Alternate Views for Tracing Query Dependencies

In the default view, the data sources are plotted at the top, and the queries cascade down below.  Fortunately you’re not stuck with this view; there are four different ways to display the model:

image

In this instance I’ve chosen Left to Right, which puts the data sources on the left and fans the query dependencies out to the right-hand side.

Honestly, if I had my way, I’d probably use Bottom to Top (data sources at the bottom and data model tables on the top.)  To me this should basically “bubble up” the model tables to the top of the screen.  Unfortunately it doesn’t quite work like that… all we can guarantee is that the data sources will be at the bottom, but the model tables could end up anywhere.

Ideally, I’d love to have an option to force the Data Sources to be lined up based on the first choice in that menu, and the Load Destinations (whether table or data model) to be lined up based on the second.  This would allow me to easily see the “From” and “To”, with the chain of what happened in between.

Tracing Query Dependencies

In the image below (click on it to see the larger version), I’ve selected one of the tables in the middle of the query dependencies tree:

image

The effect is that it highlights all precedent and dependent queries in the data flow.  That’s cool, and I’m okay with this being the default behaviour when I select a query step.  Wouldn’t it be cool though, if we also had:

  • A right click option to trace precedent queries only
  • A right click option to trace dependent queries only

Those would be super helpful in tracing a query’s flow without the extra noise, something that is really important in being able to quickly dig in to the key factors you probably want to know about your query dependencies.

Identifying Load Destinations

So the very first thing I did when I threw this specific model into the Query Dependencies view was identify two queries that were not in the query chain.  “Awesome,” I thought, so I went and deleted them.  Then I restored from backup, as one of them was in use!

Don’t get me wrong, the view was correct, it’s just that the distinction for load destinations is so weak that I saw no arrows and assumed it was good to be removed.  As it turns out, the words actually matter here:

image

The Day Types table is created from a hard coded list.  Since there are no queries flowing in or out of it (it is floating above the lines), I nuked it.  I missed the fact – especially with it being on the left – that it was actually loaded to the data model.

Raw Data-Departments, on the other hand, is pulling from the Current Workbook and is loaded as “Connection Only”.

So here are my thoughts:

  • I’d love to see nodes that are loaded to worksheets or the data model clearly identified.  Either an icon in the top right or some shading would be ideal.  Something that makes them a bit less subtle than they are today.
  • I’m not a fan of the “Not loaded” term… it’s about as awesome as the “Load Disabled” that Power Query used to use about two years ago.  This should – in my opinion – be consistent with the UI and should read “Connection only”.  “Not loaded” makes it look like it isn’t working.

Navigating Query Dependencies

One of the issues I had above is that my Day Types table – being standalone – should not sit on top of any arrows… that’s just scary bad and misleading… but that’s actually part of a much bigger issue as this is kind of the style used throughout the entire tool:

image

This also leads me to another issue: I need to be able to follow these arrows.  Today the only ability you have – because you can’t move the boxes – is to essentially print the Query Dependencies window (you’ll need screen capture software for that, since there isn’t a print button) and trace the arrows with a highlighter.

What I’d love to see in this instance is the ability to select a single arrow (or multiple arrows) and have them turn bold.  It would be an even bigger bonus if doing so also shaded the tables on each end of the selected arrows.  That would actually solve a few issues mentioned earlier too, allowing us to really drill into the relationships we need to trace.

Overall Impressions of the Query Dependencies Viewer

Overall it’s a good first version.  I’d really love to see some (or all) of the improvements I mentioned above, but it’s a HUGE amount better than what we had a month ago.  Smile

Extract Data from a Mixed Column

More and more I’m seeing examples where people are trying to extract data from a mixed column.  In other words, they have two data types in a single column, but need to find a way to extract one from the other.

Examining the issue

The sample data I’m using can be downloaded from this link.

I’m going to use Power BI Desktop for this, but the results will look identical in Excel using Power Query (except for the colour, of course.)

So let’s get started:

  • Get Data (New Query in Excel) –> From CSV –> MixedDataInColumn1.csv
  • Promote First Row as Headers
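
For the curious, the M behind those two steps looks roughly like this (the file path is wherever you saved the sample, so treat it as a placeholder):

    let
        // Read the raw CSV and promote the first row to headers
        Source = Csv.Document(File.Contents("C:\Samples\MixedDataInColumn1.csv"), [Delimiter = ",", Encoding = 1252]),
        #"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars = true])
    in
        #"Promoted Headers"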

The issue can be seen in the red circles below… the report author injected the name of each vendor into the Part No column, just above the first part supplied by that vendor.

image

So the issue here is how to extract the vendor name from the Part No column.  The problem is that there isn’t any obvious way to do this.  We have different textual values in all columns, which could change over time.  There’s really nothing that we can test for reliably in this case.

How to Extract Data from a Mixed Column

There are actually a few different ways to extract data from a mixed column… a few of which we demonstrate in our Power Query workshop.  I’m going to show just one here.

Step 1 – Identify a column with a pattern you can exploit

The key we are really looking for is a column which has values or dates for all rows other than the ones holding our vendor names.  In this case we actually have two: Part No and Cost.  Both have text on the vendor lines, but what look like values on the rest.  The challenge we have here is that we can’t always guarantee that Part No won’t have text in it.  It’s completely possible we could see a part number like TH-6715 or something.  So this leaves us with the Cost column.

Step 2 – Duplicate the identified column

This next set of steps is actually the trick that lets us work this out.

  • Right click the column in question and choose Duplicate Column
  • Right click the Cost – Copy column –> Change Type –> Whole Number
  • Right click the Cost – Copy column –> Replace Errors –> null

You should now have null values where the textual values were located:

image
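
Behind the scenes, those three steps produce M along these lines (the step names are mine and the file path is a placeholder; the UI will generate its own):

    let
        Source = Csv.Document(File.Contents("C:\Samples\MixedDataInColumn1.csv"), [Delimiter = ",", Encoding = 1252]),
        Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
        // Duplicate Cost, force the copy to whole number, then turn the conversion errors into nulls
        Duplicated = Table.DuplicateColumn(Promoted, "Cost", "Cost - Copy"),
        Retyped = Table.TransformColumnTypes(Duplicated, {{"Cost - Copy", Int64.Type}}),
        NullsNotErrors = Table.ReplaceErrorValues(Retyped, {{"Cost - Copy", null}})
    in
        NullsNotErrors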

Step 3 – Use a little conditional logic

We now have something that we can use in order to extract the Vendor name.  So let’s build a little bit of conditional logic:

  • Add Column –> Conditional Column
  • Configure the Conditional Column as follows:

image

The only trick here is to make sure you change the Output to use a column, so that you can select Part No from the list of columns.

  • Click OK
  • Right click the Vendor column –> Fill Down

The result is shown below:

image
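
In M terms, the conditional column and fill down boil down to the following.  I’ve sketched it against a tiny inline stand-in for the table above, so the values are illustrative only:

    let
        // Inline stand-in for the table as it looks at the end of Step 2
        Source = #table(
            type table [#"Part No" = text, Cost = text, #"Cost - Copy" = Int64.Type],
            {{"Contoso Ltd", "Vendor", null}, {"P-1001", "12.50", 13}, {"P-1002", "8.75", 9}}),
        // A null in the numeric copy means we are on a vendor row, so grab the Part No text
        AddedVendor = Table.AddColumn(Source, "Vendor", each if [#"Cost - Copy"] = null then [#"Part No"] else null, type text),
        // Drag each vendor name down over its own parts
        FilledDown = Table.FillDown(AddedVendor, {"Vendor"})
    in
        FilledDown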

Step 4 – Clean up

We’re now at the point of clean up which entails:

  • Filter the Cost – Copy column to remove null values
  • Delete the Cost – Copy column
  • Set the data types on all columns

The results now look as follows:

image
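
And the cleanup, expressed in M (again sketched against an inline stand-in; the final data types are my guesses at sensible choices):

    let
        // Inline stand-in for the filled-down table from Step 3 (values illustrative)
        Source = #table(
            type table [#"Part No" = text, Cost = text, #"Cost - Copy" = Int64.Type, Vendor = text],
            {{"Contoso Ltd", "Vendor", null, "Contoso Ltd"}, {"P-1001", "12.50", 13, "Contoso Ltd"}}),
        // The vendor header rows are the ones where the helper column is still null
        Filtered = Table.SelectRows(Source, each [#"Cost - Copy"] <> null),
        Removed = Table.RemoveColumns(Filtered, {"Cost - Copy"}),
        Typed = Table.TransformColumnTypes(Removed, {{"Part No", type text}, {"Cost", type number}, {"Vendor", type text}})
    in
        Typed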

At this point we can commit the query and we are good to go.

Final Thoughts

This is not a new trick by any means; I’ve been using it for a long time.  The biggest key is really about identifying patterns and thinking outside the box.

It’s unfortunately very easy to get focused on the primary column we want to solve, and lose sight of the others.  (Trust me, I’ve been there too.)  Sometimes though, when a column is particularly tough to deal with, we need to just step back and take a look at the bigger picture to see if there is a pattern in another column that we can exploit.  In fact, I’d say that this is probably one of the most important things to master when working with Power Query.

Power BI Slicers

For those coming from the Excel 2013 world, you’ll surely want to create filters using the Power BI Slicers.  After all, you know that Slicers and Timelines are two of the sexiest filters we have for controlling filter context in Power Pivot models.  In this post we’re going to explore the slicer visual, and how to get closer to what you’re used to in Excel.

The Goal

I’m going to fall back to my last project here, and have created a single visual on a blank report page.  You probably remember this one, it’s the map of where I’ve stayed so far this year:

image

What I’d like to do now is add my slicers and timelines.  I’d like slicers for Country and Province.  Should be easy, no?  Err… no.  Sad smile

The default Power BI Slicers

Now, Power BI Desktop has a Slicer visualization, as you can see here:

image

So what’s wrong with it?  I’ll build two to show you why I’m less than satisfied…  Here’s what I did:

  • Created a Slicer visual
  • Added the Country field
  • Created another Slicer visual
  • Added the Province field

Do these look like Excel slicers to you?  They look a lot more like check boxes to me, not slicers…

image

Now don’t get me wrong, Power BI Slicers certainly work, as you can see here where I’ve drilled in to USA:

image

So let’s look at the difference between the Power BI Slicers and the Excel slicers that my expectations are based upon:

  • The Province field hides all irrelevant items by default, unlike Excel.  I could not find a configuration option to change this.
  • The checkbox thing drives me crazy.  I hate checkboxes in my Windows files list, I don’t like them here, and again it is inconsistent with Excel.  I could not find a way to turn those off.
  • Originally I wanted to show the provinces in a two column slicer, like I can in Excel.  I could not figure out how to make this happen either.
  • Finally, I wanted to show the bubbles like an Excel slicer.  The closest I could get was the image below (WARNING! SUPER UGLY GRAPHIC AHEAD!)  Should you feel the need to create this abomination you need to select the slicer, go to the Formatting roller –> Items –> Outline –> Frame.

image

Gross.  That is just gross.  Honestly, I really don’t understand why the slicer is so different from Excel’s.  That slicer is pretty, and people are used to it.

Not happy with these, I deleted both slicers.

Is all hope lost for attractive Power BI Slicers?

Thankfully, the answer is no.  The Power BI team has given developers the ability to create and distribute their own visuals into the Power BI custom visuals gallery.  So let’s go and pull in a couple of those to fill this gap.

Locating the Custom Visuals Gallery

To be fair, the steps for this could be MUCH easier.  To get here the first time you can either just click this link or follow these steps:

  • Click the ellipses in the Visualizations gallery to import a custom visualization

image

  • Choose to Import a Custom Visual
  • Click the Learn More link

image

  • To be fair, you should probably read the page you’re taken to, as it talks about all the risks of using a custom visual.  (Remember, not all custom visuals are provided by Microsoft; many are provided by 3rd parties.)
  • I scrolled straight to the bottom and clicked the link in the See Also section to go to the Power BI custom visuals gallery

You’ll be taken to the gallery, which has a lot of pretty visuals that can be imported into your project.

To make it easier to find custom visuals, I’d recommend you do a couple of things here:

  1. Bookmark this page (making it a bit easier to get back to it.)
  2. Choose to sort the gallery by Name rather than by Most Recent (which is the default)

When you click on a visual it will offer to download a pbiviz file that you can store in a folder.  You’ll want to remember the location, as you’ll need to import the visuals into every new PBI file you create.

I downloaded a specific visual here: the Chiclet Slicer, which, ironically, is published by Microsoft.

Importing the Chiclet Slicer

When I returned to Power BI Desktop, it was still sitting at the Import Custom Visual dialog, which was convenient.  So I was able to just click the big yellow Import button and select the ChicletSlicer file.  Doing so adds a new option to the Visualizations gallery:

image

I created two new Chiclet slicers, one for Country and one for Province, and was pleased to end up with the following:

image

Now that’s more like it!  Certainly needs some tweaking, but better than the past iteration.  So let’s get to that tweaking…  I changed the formatting options for each of the slicers as follows:

  • The Country slicer
    • General –> Columns –> 1
    • Header –> Off
    • Chiclets
      • Unselected Color = very light grey
    • Title –> On
      • Text = Country, Font Color = Black, Alignment = Center, Text Size = 10
  • The Province slicer
    • General –> Columns –> 2
    • Header –> Off
    • Chiclets
      • Unselected Color = very light grey
    • Title –> On
      • Text = Province, Font Color = Black, Alignment = Center, Text Size = 10

And, as you can see, the results are pretty good:

image

A couple of things that I couldn’t figure out here though:

  • I wanted to align the text in my “chiclets” to the left, like in Excel.  Can’t seem to find an option for that.
  • There is a tantalizing option in the “General” section to show disabled items “Inplace”, and an option in the chiclets to set the colour for those items.  I would have expected it to be equivalent to Excel’s “Show Disabled”, but it doesn’t seem to do that.  I have not figured out how to replicate that effect.

Final Thoughts

To be fair, there are a ton of configuration options for the Chiclet slicer, much more than I’m going to cover.  Why this slicer isn’t part of Power BI’s default install is beyond me… especially since it’s published by Microsoft.