Using PowerQuery with Email

At our last MVP summit, I sat in on a session on PowerQuery. Our presenter, who is pretty passionate about it, at one point asked us, “Who has wanted to use PowerQuery on their email inbox?” And honestly, I have to admit, I was kind of “meh” about the idea. After all, email is a distraction, and not something I generally like to go back and study. Why would I care how many emails someone sent me, or what time they sent them? Not interesting.

But oh wait… after fooling around with it when I was bored one day, I came across something pretty damn cool. PowerQuery can do much more than just count emails… it can parse the contents! (Okay, that’s uber-geek for “it can read and examine the contents of the email.”) Why is that important? It’s because PowerQuery is all about sourcing and reshaping data.

We can build this…

Check this out… this chart, created in Excel, shows the affiliate income that I’ve earned from selling Charley Kyd’s dashboard kits from Exceluser.com.

[Image: chart of affiliate income by month and year]

Yeah, okay, it’s not a huge amount of money, but that’s not the point.

… From This…

The point is this: Charley sends an automated email each time there is a sale, and the contents always follow the same structure:

[Image: sample sale notification email]

And here’s how…

Step 1: Create a folder and a rule in Outlook

The first thing I did was set up a rule in Outlook to funnel all emails like the one above into the “Affiliates\Exceluser.com” folder.  While it’s not entirely necessary – I could target the inbox itself – this makes the updates run a bit faster, as they don’t have to examine ALL of my email.  It’s also good housekeeping.  That folder will now be the ENTIRE source for the chart above.  Even better, I didn’t even have to read the emails myself!

Step 2: Open Excel and create a new Power Query

To do this, you must be running the December 2013 update at a minimum.  (Earlier versions don’t support Exchange as a data source.)  To set it up:

POWER QUERY –> From Other Sources –> From Microsoft Exchange

If you haven’t used an Exchange connection before, you’ll be prompted for your credentials, which it will then save for you.  Unfortunately, only one Exchange account is supported at a time right now, so if you need to change it, go to POWER QUERY –> Data Source Settings to modify your credentials.

Now, for me it pulled up a Navigator task pane on the right with a list of Mail, Calendar, People, Tasks and Meeting Requests.  I personally wish it would take you straight into Power Query, as this is just one more step to slow me down.  I have to select Mail, then click Edit Query (at the bottom) to start building my query.  Once I do, I see this:

[Image: initial Mail query showing all available columns]

Wow… that’s a lot of columns, but not really where I want to be.  Let’s filter this list down a bit…

I clicked the dropdown arrow beside “Folder Path”, and then clicked the “Load More” button, since at first there is little in the list:

[Image: Folder Path filter dropdown with the Load More option]

Perfect, now I uncheck Select All, check \Affiliates\Exceluser.com\ and I’m set… my list will filter to only those emails from Exceluser.com.
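For the curious, the steps so far are recorded as M code behind the scenes. It looks roughly like this (the step names are mine, and the exact source function Power Query records may differ by version):

```
let
    Source = Exchange.Contents(),
    Mail = Source{[Name="Mail"]}[Data],
    FilteredRows = Table.SelectRows(Mail, each [Folder Path] = "\Affiliates\Exceluser.com\")
in
    FilteredRows
```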

Step 3: Removing Irrelevant Columns

There are a lot of them, so I’m going to get rid of the ones I don’t need.  I select and remove each of the following:

“Subject”, “Sender”, “DisplayTo”, “DisplayCc”, “ToRecipients”, “CcRecipients”, “BccRecipients”, “DateTimeReceived”, “Importance”, “Categories”, “IsRead”, “HasAttachments”, “Attachments”, “Preview”, “Attributes”, “Id”

That leaves me with 3 columns in all:  “Folder Path”, “DateTimeSent” and “Body”:

[Image: query reduced to the Folder Path, DateTimeSent and Body columns]
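In M terms, all of that column removal is just a single Table.RemoveColumns step, along these lines (the input step name is whatever your previous step happens to be called):

```
RemovedColumns = Table.RemoveColumns(FilteredRows,
    {"Subject", "Sender", "DisplayTo", "DisplayCc", "ToRecipients", "CcRecipients",
     "BccRecipients", "DateTimeReceived", "Importance", "Categories", "IsRead",
     "HasAttachments", "Attachments", "Preview", "Attributes", "Id"})
```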

Step 4: Expand the Email Body

Now here’s where the cool starts to become visible… the Body field contains a bunch of entries called “Record”, which are shown in green.  That means they are clickable and, if you do click one, it will show you the entire body of the selected email.  But even better is that I can expand those into the table itself.  That funny little double-headed arrow next to “Body” does just that.  When I click it, I can see all the items in the record which, in this case, include both the text and HTML bodies of the email:

[Image: Body record expansion options showing TextBody and HTMLBody]

I’d rather play with plain text than HTML, so I cleared the HTMLBody flag and clicked OK, resulting in the following:

[Image: query with the TextBody column expanded]

So the body has expanded, and if I click on a specific record it shows the entire body in the window at the bottom.  Very cool, and I can see that this email shows earnings of $12.91.  Even better, as I click through the other records, Charley’s emails are identical, with only the value changing.  This is VERY good.

Step 5: Strip out the Value

Now the above is good, but it contains a bunch of hard returns and other noise that I want to get rid of.  So I select the Body.TextBody.1 column, then go to Transform –> Text Transforms –> Clean.  That removes all the hard returns, resulting in this:

[Image: TextBody column after the Clean transform]

Now I’m going to do a find and replace to remove all characters before the value.  Since the text is always the same, I go to Replace Values and replace “Hi Ken Puls,This email is to notify you that you have made a sale.You have earned $” with nothing.

The column will now start with the value, but I need to remove everything after the value.  Easiest way for me was to:

  • Replace “ in commission for this sale.” (notice the leading space) with a pipe: “|”
  • Choose Split Column –> By Delimiter –> Custom –> |

I then made sure that the Body.TextBody.1.1 column was selected and chose to set the Data Type to Number.

Cool, I’ve got my values now!
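If you peek at the formula bar, the clean / replace / split / retype sequence above comes down to a handful of M steps, roughly like this (step names are mine, and your recorded code may differ slightly):

```
Cleaned = Table.TransformColumns(Expanded, {{"Body.TextBody.1", Text.Clean}}),
RemovedPrefix = Table.ReplaceValue(Cleaned,
    "Hi Ken Puls,This email is to notify you that you have made a sale.You have earned $",
    "", Replacer.ReplaceText, {"Body.TextBody.1"}),
MarkedSuffix = Table.ReplaceValue(RemovedPrefix,
    " in commission for this sale.", "|", Replacer.ReplaceText, {"Body.TextBody.1"}),
SplitAtPipe = Table.SplitColumn(MarkedSuffix, "Body.TextBody.1",
    Splitter.SplitTextByDelimiter("|"), {"Body.TextBody.1.1", "Body.TextBody.1.2"}),
Typed = Table.TransformColumnTypes(SplitAtPipe, {{"Body.TextBody.1.1", type number}})
```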

Step 6: Final cleanup

Finally, I did a bit of cleanup…

  • I removed the Body.TextBody.1.2 column as well as the Folder Path column, since I don’t need either any more
  • I renamed the Body.TextBody.1.1 column to Commission
  • I renamed the table (on the right) to Commissions

And here it is:

[Image: final query with the cleaned-up Commission column]

And finally I clicked OK to load the table to the worksheet.

Step 7: Create the Report

And this part is probably the easiest… I created a simple Pivot:

  • DateTimeSent on rows
  • Group dates by Months and Years
  • Years on Columns
  • Commissions on Values

And then I created the PivotChart, where I removed the Field Buttons, vertical axis and gridlines, and added data labels.  Pretty straightforward if you’re used to working with PivotTables or charts at all.

Final Thoughts

This tool is truly awesome.  All I need to do to get an update on my affiliate status is to open the file and refresh the data.  It will reach into my email folder and do the rest.  Simply amazing, really.

And while the setup may look like a lot of steps, this file took me less than 30 minutes to build, from setting up the rule in Outlook, to pulling the data into PowerQuery, through cleaning it and reporting on it.  Granted, I’ve been playing around with similar data lately, but still, it was quick from start to finish.

So… “Meh, it’s just email”?  Not any more.  :)

From TXT files to BI solution

Several years ago, when Power Pivot first hit the scene, I was talking to a buddy of mine about it. The conversation went something like this:

My Friend: “Ken, I’d love to use PowerPivot, but you don’t understand the database culture in my country. Our DB admins won’t let us connect directly to the databases, they will only email us text files on a daily, weekly or monthly basis.”

Me: “So what? Take the text file, suck it into PowerPivot and build your report. When they send you a new file, suck it into PowerPivot too, and you’re essentially building your own business intelligence system. You don’t need access to the database, just the data you get on a daily/weekly basis!”

My Friend: “Holy #*$&! I need to learn PowerPivot!”

Now, factually everything I said was true. Practically though, it was a bit more complicated than that. See, PowerPivot is great at importing individual files into tables and relating them. The issue here is that we really needed the data appended to an existing file, rather than related to an existing file. It could certainly be done, but it was a bit of a pain.

But now that we have Power Query, the game changes in a big way for us; Power Query can actually import and append every file in a folder, and pull it directly into the data model as one big contiguous table. Let’s take a look at a practical example…

Example Background

Assume we’ve got a corporate IT policy that blocks us from having direct access to the database, but our IT department sends us monthly listings of our sales categories. In the following case we’ve got 4 separate files which have common headers, but the content is different:

[Image: four monthly text files with common headers]

With PowerPivot it’s easy to link these as four individual tables into the model, but it would be better to have a single table that holds all the records, as that makes a better Pivot source. So how do we do it?

Building the Query

To begin, we need to place all of our files in a single folder. This folder will be the home for all current and future text files the IT department will ever send us.

Next we:

  • Go to Power Query –> From File –> From Folder
  • Browse to select our folder
  • Click OK

At this point we’ll be taken into PowerQuery, and will be presented with a view similar to this:

[Image: folder query listing the four files]

Essentially it’s a list of all the files and where they came from. The key piece here, though, is the tiny little icon next to the “Content” header: the one that looks like a double down arrow. Click it and something amazing will happen:

[Image: combined content from all four files in a single column]

What’s so amazing is not that it pulled in the content. It’s that it has actually pulled in 128 rows of data, which represent the entire content of all four files in this example. Now, to be fair, my data set here is small. I’ve also used this to pull over 61,000 records from 30 csv files into the data model, and it works just as well, although it only pulls the first 750 lines into Power Query’s preview for me to shape the actual query itself.

Obviously not all is right with the world though, as all the data came in as a single column. These are all tab-delimited files (something that can easily be confirmed by opening a source file in Notepad), so this should be pretty easy to fix:

  • Go to Split Column –> Delimited –> Tab –> OK
  • Click Use First Row as Headers

We should now have this:

[Image: data split into columns with headers promoted]

Much better, but we should still do a bit more cleanup:

  • Highlight the Date column and change the Data Type to Date
  • Highlight the Amount column and change the Data Type to Number

At this point it’s a good idea to scroll down the table to have a look at all the data. And it’s a good thing we did too, as there are some issues in this file:

[Image: error values appearing in the Date and Amount columns]

Uh oh… errors. What gives there?

The issue is the headers in each text file. Because we changed the first column’s data type to Date, Power Query can’t convert the text “Date” in that column, which results in the error. The same is true of “Amount”, which can’t be converted to a number either, since it is also text.

The manifestation as errors, while problematic at first blush, is actually a good thing. Why? Because now we have a method to filter them out:

  • Select the Date column
  • Click “Remove Errors”
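Put together, the transformation steps so far boil down to just a few lines of M. Here’s a sketch (the folder path is made up, and older builds may record slightly different function calls):

```
let
    Source = Folder.Files("C:\Data\SalesFiles"),
    Combined = Csv.Document(Binary.Combine(Source[Content]), [Delimiter = "#(tab)"]),
    Promoted = Table.PromoteHeaders(Combined),
    Typed = Table.TransformColumnTypes(Promoted, {{"Date", type date}, {"Amount", type number}}),
    RemovedErrors = Table.RemoveRowsWithErrors(Typed, {"Date"})
in
    RemovedErrors
```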

We’re almost done here. The last things to do are:

  • Change the name (in the Query Settings task pane at right) from “Query 1” to “Transactions”
  • Uncheck “Load to worksheet” (in the bottom right corner)
  • Check “Load to data model” (in the bottom right corner)
  • Click “Apply & Close” in the top right

At this point, the data will all be streamed directly into a table in the Data Model! In fact, if we open PowerPivot and sort the dates from Oldest to Newest, we can see that we have indeed aggregated the records from the four individual files into one master table:

[Image: PowerPivot window showing the combined records sorted by date]

Implications

Because PowerQuery is pointed to the folder, it will stream in all files in the folder. This means that every month, when IT sends you new or updated files, you just need to save them into that folder. At that point refreshing the query will pull in all of the original files, as well as any you’ve added. So if IT sends me four files for February, then the February records get appended to the January ones that are already in my PowerPivot solution. If they send me a new file for a new category (provided the file is set up as three columns of tab-delimited data), it will also get appended to the table.

The implications of this are huge. You see, the issue that my friend complained about is not limited to his country at all. Every time I’ve taught PowerPivot to date, the same issue comes up from at least one participant. IT holds the database keys and won’t share, but they’ll send you a text file. No problem at all. Let them protect the integrity of the database; all you now need to do is make sure you receive your regular updates and pull them into your own purpose-built solution.

Power Query and Power Pivot for the win here. That’s what I’m thinking. :)

Un-Pivoting Data in Power Query

I was fooling around with the latest build of Power Query (for Excel 2010/2013), and I’ve got to say that I’m actually really impressed with this piece. I think the user experience still needs a bit of work, but once you know how, this is really simple.

Whenever I teach courses on PivotTables people want to eat them up, but often they’ve already got their data in a format that resembles the PivotTable output, rather than a format that is consumable by the Pivot engine. Take the following set of data.

[Image: source data laid out report-style, with months across the top]

A classic setup for how people think and want to see the data. But now what if the user wanted to see the records by month under the category breakdowns? An easy task for a Pivot, but it would require a bit of manipulation without it. So how do we get it back to a Pivot compliant format?

With Power Query it’s actually super simple:

  • Step 1: Select the data and turn it into a table
  • Step 2: Select Power Query –> From Table to suck it into Power Query
  • Step 3: Un-pivot it

Okay, so this one takes a bit of know-how right now. If you just click Unpivot, you get something really weird:

[Image: the odd result of clicking Unpivot with no columns selected]

What actually needs to happen, which isn’t totally intuitive, is that we need to click the header for the January column, hold down CTRL, and click the February and March column headers:

[Image: January, February and March column headers selected]

Once we’ve done that we click Unpivot and TA-DA!

[Image: the un-pivoted data]

So by virtue of being able to choose one or more columns, it’s actually quite flexible as you can choose HOW to un-pivot, which is uber-awesome. It’s a shame the UI doesn’t help you realize that today, but hopefully that gets fixed in future.
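Under the hood, the columns you select are simply the ones fed to M’s Table.Unpivot function. Selecting January through March and clicking Unpivot records something like this (the input step name is illustrative; the column names match the example above):

```
Unpivoted = Table.Unpivot(Source, {"January", "February", "March"}, "Attribute", "Value")
```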

At any rate, you can right-click the headers to rename the columns, then click “Apply and Close”, and we end up with the un-pivoted source in our worksheet. And now we can create a new PivotTable off that source:

[Image: new PivotTable built from the un-pivoted source]

Even better, if we add some new data to our original report:

[Image: new data added to the original report]

We can then refresh the query on the worksheet and the new records are pulled in. With a refresh on the Pivot as well (yeah, unfortunately you can’t just refresh the Pivot to drive the query to update), it’s all pulled in:

[Image: refreshed query and PivotTable showing the new records]

Pretty damn cool!

Newsletter and blog updates

Today marked a new milestone for me, as I actually sent out a newsletter to everyone who has ever signed up at Excelguru.ca.  I was a little hesitant, but figured I’d give it a go and see how people reacted.

I actually got some really nice feedback from people, which was great, with some even asking how to get others to sign up.

To that end, I’ve added a newsletter signup button on my Facebook page at www.facebook.com/xlguru.  And, since my blog theme was getting a bit stale, I’ve updated that as well, adding a link on the right hand side here.

I’m not totally sure I’m sold on the new theme, but let me know what you think.

PowerPivot training live in Victoria, BC!

Anyone who follows my website or Facebook Fan Page knows that I’m a huge fan of PowerPivot. Well, great news: now you can come and learn not only why this is the best thing to happen to Excel in 20 years, but also how to take advantage of it yourself!

I’ll be teaching a course on PowerPivot and DAX in Victoria, BC on November 22nd, 2013. While the course is hosted by the Chartered Professional Accountants, it’s open to anyone who wishes to subscribe.

If you’ve been trying to figure out how to get started with Power Pivot, you’re confused as to how and why things work, or you want to master date/time intelligence in PowerPivot, this course is for you.

100% hands on, we’ll start with basic PivotTables (just as a refresher). Next we’ll look at how to build the PowerPivot versions and why they are so much more powerful. From linking multiple tables together without a single VLOOKUP to gaining a solid understanding of relating data, you’ll learn the key aspects of building solid PowerPivot models. We’ll work through understanding filter context, measures and relationships, and finish the day by building measures that allow you to pull great stats such as Month to Date for the same month last year, among others.

Without question, you’ll get the most out of this course if you’re experienced with PivotTables. If you don’t feel like you’re a pro in that area though, don’t worry! As an added bonus, if you sign up via this notice, let me know and I’ll provide you with a free copy of my Magic of PivotTables course, so that you can make sure you’re 100% PivotTable compatible before you arrive.

This is a hands on course, so you need to bring a laptop that is pre-loaded with either:

  • Excel 2010 and the free PowerPivot download
  • Excel 2013 with Office 2013 Professional Plus installed (yes the PLUS is key!)
  • Excel 2013 with Excel 2013 standalone installed.

If you have any questions as to which you have installed, simply drop me a note via the contact form on my website, and I’ll help you figure it out.

Full details of the course contents, as well as a registration link, can be found at http://www.icabc-pd.com/pd-seminars-seminar.php?id=2849. Don’t wait too long though, as the registration deadline is November 14th!

Hope to see you there!

Excel Power Map Sample

Yesterday, Microsoft released a preview of Power Map, a geo-spatial mapping Excel add-in formerly known as GeoFlow.  It’s a pretty cool add-in that allows you to plot data based on geographic identifiers (longitude, latitude, country names, town names, postal codes, etc.), and show it on a 3D or 2D map.  It works with tabular data sets, whether they be from an Excel table, a database, or Power Pivot.

The other cool thing is that, once you get it right, you can actually produce a video that can be shared with non-Excel users!  I’ve uploaded one of those to YouTube so you can check it out.

This video shows the wind speeds of hurricane Sandy as it travelled through the Caribbean to its eventual landfall on the USA’s east coast.  I used a heat map to show the speed (the redder it gets, the faster the wind), and played it over time so you can trace the path.

Pretty cool stuff!

If you have Excel 2013 Pro Plus, you can download the preview here:  http://www.microsoft.com/en-us/download/details.aspx?id=38395

OFFSET or Named Range – Which would you use?

I’m working on a spreadsheet where users will be able (in fact, required) to insert new rows at a later date.  When they do so, it’s critical that the section subtotals always… well… subtotal correctly.

The challenge, of course, is that you can’t rely on newly inserted rows being picked up by the subtotal formulas, so someone needs to check them.  At least, you can make it work, but it takes more than just a SUM or SUBTOTAL formula to get it done.

I reached back to the method using a named range that I describe in the “Always Refer to the Cell Above” Excelguru KB Entry, resulting in a formula that looks like this:

[Image: SUBTOTAL formula using the named range]

Of course, I don’t actually need to use the named range to do this.  I could make it work by using the OFFSET function in L66 as follows:

=SUBTOTAL(9,L62:OFFSET(L66,-1,0))

Either will work just fine, and will not be tripped up by a user inserting a new row within my boundary, so I should never (okay, never say never) run into an issue with this particular problem.
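For reference, the named-range version relies on a workbook name that always refers to the cell above wherever it’s used (per the KB entry). The name “CellAbove” below is just what I’d call it for illustration; yours is whatever you defined when you set it up:

```
=SUBTOTAL(9,L62:CellAbove)
```

Because the name is defined with a relative reference, “CellAbove” always resolves to the cell immediately above the formula, so newly inserted rows fall inside the range automatically.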

I’m curious which method you would use: Named Range or OFFSET, and why…

A review of Add-in Express

A while ago I started transitioning from VBA to VB.NET again, attempting to build a tool to manipulate the new Power Pivot components in Excel 2013. While I was able to get part way to a working solution using Visual Studio 2012 and VSTO, I ran into two key issues:

1. VSTO seems to be “bit” specific, meaning that I would need to keep one version of the solution for 32 bit versions of Office, and another for 64 bit versions, and

2. While I could run my code, the Power Pivot engine would crash on me unless I opened Power Pivot BEFORE I ran any of my code.

Both of these issues were rather severe to me, as I didn’t want to maintain multiple versions of the same code, nor could I release something and expect users to open Power Pivot before running my project. A friend of mine suggested I try Add-in Express to deal with these issues.

In the spirit of full disclosure, I contacted them and asked if they had a trial version. They don’t, but offered to let me trial it if I’d blog on my experiences. I agreed to do that, and what follows are my honest observations of the software.

I do want to preface that, like my friend Rob Collie, I am not a “Read the manual” kind of guy. Ironic, since I write a lot of material, but I take the approach of diving in, and hitting Bing for a quick pointer when I get stuck. It usually causes me a lot of pain, but I tend to learn better that way.

So, here’s how it all came together for me…

When you install Add-in Express, you get a new set of templates. I started my project by creating a new ADX COM Add-in targeting the .NET Framework 2.0. (For those not in the know, in order to target Excel 2010 with VSTO, you need to use .NET 4.0 and Excel 2013 is .NET 4.5. So .NET 2.0 is way too early a framework to have ever even heard of Power Pivot!)

[Image: Add-in Express project templates in Visual Studio]

From there you pretty much follow the prompts through the setup. I called it “Connect”, I set the minimum supported version to Office 2010 (since PowerPivot didn’t exist in 2007), I chose a Visual Basic Project, I selected Excel, and I left the rest of the settings at their defaults. Pretty easy, that part.

Once I had a project to work with, I created a new VB Module, just like normal, and built the code I would need to refresh my PowerPivot model and all the PivotTables. (Be aware that, for simplicity of the post, this is 2013 specific code, and will not work with Excel 2010.)

The key piece in this is making sure the xlApp declaration is correct, as you need to refer to the AddinModule portion in order to bind it to Add-in Express’s handlers, instead of just binding to the Interop.Excel objects. That change is what makes Add-in Express work:

Imports Excel = Microsoft.Office.Interop.Excel

Module Module1

    Public xlApp = Connect.AddinModule.CurrentInstance.ExcelApp

    Public Sub RefreshPowerPivotTables()

        Dim xlWorkbook As Excel.Workbook = xlApp.ActiveWorkbook
        Dim ws As Excel.Worksheet
        Dim pvt As Excel.PivotTable

        'Attempt to connect
        Try
            xlWorkbook.Connections("ThisWorkbookDataModel").Refresh()
        Catch ex As Exception
            'Data connection could not be refreshed.
            MsgBox("Sorry, but I could not refresh the model! Are you sure this workbook has one?")
            Exit Sub
        End Try

        'Refresh all PivotTables
        For Each ws In xlWorkbook.Worksheets
            For Each pvt In ws.PivotTables
                With pvt
                    .PivotCache.Refresh()
                    .RefreshTable()
                End With
            Next
        Next

    End Sub

End Module

So far so good. Now I needed a user interface.

To be fair, it took me a bit to figure this one out. Things got easier once I finally realized that you need to open the “AddinModule.vb” portion in the solution explorer and then add a Ribbon tab to the canvas.

Despite reading very little documentation, with a little help from the blogs and articles on the Add-in Express site when I did get stuck, I was very quickly able to build a simple UI. There’s a good article on doing this here, which I wish I’d read earlier in the process.

As I say, it’s a really simple UI: a tab called “Model”, a single group, and a button with an image on it.

One criticism I do have is that it would be nice to be able to link the buttons and other controls to their callbacks inside the visual designer. As is, it’s a bit clunky, as you have to select the designer, then choose the other controls in the properties window. It’s not totally intuitive, but once you know where to look (read the article linked to above), it is workable.

My callback code for my button (which I didn’t bother renaming) is as follows:

Private Sub AdxRibbonButton1_OnClick(sender As Object, control As IRibbonControl, pressed As Boolean) Handles AdxRibbonButton1.OnClick
    Call RefreshPowerPivotTables()
End Sub

And with that done, it came time to debug. Again, fairly straightforward:

  • Build -> Build the project
  • Build -> Register ADX project
  • Make sure Excel is closed
  • Start the debugging engine

My “Model” tab showed up, with the command I’d built, as shown below:

[Image: the custom Model ribbon tab in Excel]

And with a single click (and a bit of a wait since Power Pivot is so slow), my Power Pivot data was refreshed, and the PivotTable updated to reflect the changes I made in my database. Notice the new customer and the new sales transactions for 8/1/2013:

[Image: refreshed PivotTable showing the new customer and transactions]

But the best part is this:

  • The solution is deployable to both 32 bit and 64 bit Office platforms, and
  • I can open my Power Pivot project even if I run my code first, and it doesn’t crash.

I can honestly say that I fought that Power Pivot crash issue for about 2 months with VSTO, and I was really worried that it was going to kill my project completely. No amount of searching would turn up a fix, and other help calls didn’t yield any gold either; where they were answered, it was with a “don’t know” answer. Add-in Express has actually made this goal achievable.

I’ll also tell you that, while refreshing Power Pivot isn’t the focus of my full project, I have been able to use Add-in Express to successfully target and manipulate Power Pivot in both 32 and 64 bit versions of both Excel 2010 and 2013. That is, multi-version deployment with one code base. Pretty damn awesome.

I should also mention that their support has been phenomenal as well. Not only have they answered my emails, but I even ended up on a call with one of their lead people to examine why I didn’t seem able to use the debugging tools at first (a blog post for another day). 30 minutes, problem solved, and I’m good to go. Again, pretty damn awesome.

Readers of this blog will know that I don’t endorse very many products at all. Sure, I use Google Adwords and stuff, but I don’t write too many blog posts talking about how awesome a product is. Here’s my word on Add-in Express:

I’m sold. This product has been a life saver, and I won’t develop using VSTO.

Conditional Formatting Formula Inspector

David Hagar made a comment in my last blog post that I should add a userform to the code that I built there.  Since I had to use it on 40 workbooks, and figured that it could come in useful again, I did exactly that.  If you’d like to try it, you can download a copy.

It needs to be unzipped and installed as an add-in, at which point you’ll end up with a CF Inspector button on the Formulas tab:

[Image: CF Inspector button on the Formulas tab]

That takes you into the userform where you can type your text and click Find.  It will search all conditional formatting formulas in the workbook for your text, and return the list.

[Image: CF Inspector userform with search results]

Once you select an item, its Applies To address and formula will show in the boxes below so you can see them all.

It’s pretty simple, but could be useful, since this kind of facility doesn’t exist in Excel by default.  Yes, it would be nice if it did the replace as well, but given the stability issues I mentioned in the last post, I wasn’t going to try that on my workbook at this point.  Maybe in future, if I add a backup facility first.

Let me know what you think!

Find and Replace In Conditional Formats

Yesterday I made a comment on Twitter that I was trying to figure out whether it would be faster to check the conditional formatting formulas in 40 workbooks manually to ensure they were correct, or to write a tool to do it.  My buddy Jon Peltier tweeted back saying, “Don’t know about faster, but it’s got to be more fun to code it.”  I thought about it, but when I found that one workbook returned a count of 250 rules… the answer was to code it.

Well, yes, Jon was right… if your definition of fun is akin to performing home dentistry for your root canal.

The code I used to modify my conditional formats has been incredibly unreliable, returning different effects depending on the machine it is run on, the version of Excel, and maybe a few other things.  (It did different things after I had lunch too!)

I finally gave up on re-writing the conditional formatting rules, and ended up just running a macro to tell me which formulas I needed to edit:

Sub FindCFIssues()

    Dim ws As Worksheet
    Dim ftc As FormatCondition
    Dim sFormula As String
    Dim sFind As String
    Dim sReplace As String
    Dim lCount As Long

    On Error Resume Next
    sFind = "ops-Internal"
    sReplace = "ops_Internal"

    For Each ws In ActiveWorkbook.Worksheets
        For Each ftc In ws.UsedRange.FormatConditions
            With ftc
                sFormula = Replace(.Formula1, sFind, sReplace, 1, compare:=vbTextCompare)
                If .Formula1 <> sFormula Then
                    Debug.Print ws.Name & ": " & .AppliesTo.Address
                    lCount = lCount + 1
                End If
            End With
        Next ftc
    Next ws

    If lCount > 0 Then
        MsgBox "All done, a few issues were found", vbCritical
    Else
        MsgBox "No issues!", vbInformation
    End If
End Sub

Ultimately, this code searches for “ops-internal”, and flags my attention, because it should be “ops_Internal”.

Some weirdness here too… the “Next ftc” portion of the code runs until it’s gone through all the rules, then errors.  I had to add the On Error Resume Next statement above to let it continue, as it choked with just “Next” inside the other loop.

So at least I’ve got something now that quickly tells me if I have an issue, and prints out the offending rules in the VBA Immediate window.  I’d rather have the replacement done, but really don’t have the time to work through all the idiosyncrasies… err.. I mean fun… that will be involved in getting it right.