Thoughts on the real meaning of “answers accelerated”


Adoption, Opportunity and Participants in the Global Cloud Analytics Market

We are in the middle of a series on the syndication of analytic applications in a cloud environment. As a reminder, we have defined syndication as the ability for a client to quickly and easily provision a cloud-based analytic environment for their own users, grant user access, control the data the user can see, and even enforce the analytics which the user will be able to produce. This post analyzes the market for cloud based analytic applications and addresses the lack of consideration given to syndication.

Businesses have followed a distinct adoption curve over the past several years. Those businesses that began to adopt the Internet in the mid-90’s have generally progressed up the adoption curve in a distinguishable pattern. In a comprehensive study of Internet usage patterns in the business market, Cognetics, Inc. recognized the following five distinct stages of Internet adoption:

Stage One: Access – The business uses the Internet for basic functions such as email and web-based research.

Stage Two: Procurement – The business purchases items (e.g. travel services, office supplies) on the Internet.

Stage Three: Presence – The business has a website and social media presence.

Stage Four: Commerce – The business sells products or services on the Internet.

Stage Five: Services – The business uses the Internet in a sophisticated way to obtain application services (e.g. analytic applications).

It is important to note that these stages of Internet adoption are progressive. Although it is possible for businesses to adopt several of these stages simultaneously or even “skip” a stage (for example, some services businesses may not have services which can be sold or delivered online, so those companies may skip the “Commerce” stage), businesses almost always progress through these stages, and rarely (if ever) revert backwards through these stages.

Generally speaking, the businesses that start off with Internet access and procurement right away are the same ones which progress through to the Stage Five level of advanced application usage. This pattern of progression is what leads industry analysts to project dramatic levels of growth in cloud based applications as businesses proceed through the adoption cycle.

Because many businesses are still progressing up the Internet adoption curve, the market opportunity for cloud based analytic players is poised for tremendous growth from today’s relatively modest base. In fact, MarketWatch, in a soon-to-be-released report, predicts the global cloud analytics market will reach $26 billion by 2023, growing at an 18% CAGR.

MarketWatch also named as major market participants: Oracle, SAP, Microsoft, HP, IBM, SAS, Teradata, Google, Informatica, Salesforce, Tibco and MicroStrategy. Nine of these companies have more than one software offering, often advertising integration with each other and with classic analytic vendors like Tableau and QlikView. Salesforce is a niche player specializing in CRM analytics. Tibco’s main analytic program is Spotfire™, and it relies on other companies’ consultants to deliver its solutions.

While there may be room for disagreement among analysts about categories and participants, none have been able to elucidate, much less isolate and quantify, the market for analytic syndication support. Other market research firms like Gartner cast a wider net, including vendors beyond the ones we mention above, yet none of those vendors offer syndication as an option either.



The Value Proposition Behind Cloud Based Analytic Applications

We are in the middle of a series on the syndication of analytic applications in a cloud environment. As a reminder, we have defined syndication as the ability for a client to quickly and easily provision a cloud-based analytic environment for their own users, grant user access, control the data the user can see, and even enforce the analytics which the user will be able to produce.

Cloud applications are less expensive than conventionally purchased analytic software, easier to use, require less commitment, and enhance business processes and competitiveness by using the Internet. These value propositions are particularly attractive for business customers.

The value propositions include:

1. Worldwide 24x7 access to applications.

Cloud based analytic applications which are fully syndicatable are accessible anytime, from anywhere in the world, using an Internet connection and a web browser. This can be especially important if a business and its partners are located in several countries. For example, remote users, traveling employees and business affiliates can all get easy and fast access to the application.

2. Reduced Capital and IT expenditures

Cloud based analytic applications which are fully syndicatable don’t require individual offices or locations to set up their own servers or networks. Hosting a comparable network-based version of an analytic application requires a significant investment in IT infrastructure, people, and processes which are not among most companies’ core competencies.

3. Enhanced Working Relationships and Productivity

Cloud based analytic applications which are fully syndicatable can create great gains in productivity for short-term projects, for working together with partners, customers or suppliers, and for keeping a geographically dispersed workforce in sync. The distributed nature of the Internet allows unique communication advantages, with outstanding results.

4. Rapid Application Deployment

Cloud based analytic applications which are fully syndicatable can be rented and deployed within a few minutes, using a browser.

5. Reduced Maintenance/Admin Costs

It only takes a few minutes for a customer to deploy an application for a new user over the Internet. Application “instances” and secure databases are self-provisioned. Application code is updated and enhanced frequently in the runtime environment. Users can choose self-service administration options, authorized restriction levels, etc. Businesses, workgroups, and remote departments can provision business analytic applications across the web and thereby substantially reduce local IT investments.

6. Reliable 24x7 monitoring and self-support features

In this model, availability, billing and online customer support features are provided with high reliability, scalability and security and come free of charge for each new user deployed.



What the Self-Provision of Cloud Based Analytic Applications Means

We are in the middle of a series on the syndication of analytic applications in a cloud environment. As a reminder, we have defined syndication as the ability for a client to quickly and easily provision a cloud-based analytic environment for their own users, grant user access, control the data the user can see, and even enforce the analytics which the user will be able to produce. This post focuses on the capabilities required for any cloud-based vendor of analytic software to effectively allow customers to self-provision analytic environments.

In order to sell analytics effectively to the business market, applications must not only meet these requirements; the “syndication” approach must also be simple and affordable. For a vendor to offer simple, affordable analytic applications and still sustain a good analytic business, the vendor must be able to deliver these applications to the client in a way which the client can mass-customize. This is an approach which allows end users to customize applications within defined and automated parameters. Mass customization must allow for reduced human intervention and capital outlays, since application pricing sensitivity puts pressure on vendor margins in this marketplace. Consequently, large volumes of end users are required to make the business model attractive. In a cyclical effect, this volume requirement further reinforces the need for low price points, since lower price points drive higher volume demand.

Effective mass customization of applications requires:

that users can sign up and unsubscribe at will, without minimum time commitments or penalties for early cancellation;

that rental and administration of the application is done by the end user, through a browser, with no need for human intervention in the sales or setup process;

that setup charges are minimal, if any;

that applications function in both a “public cloud” environment (meaning that multiple, simultaneous users of multiple applications are supported on a single server) and a “private cloud” environment (meaning each business has its own environment);

that price points are low enough to obtain volumes which are sufficient to achieve acceptable business return; and

that applications are maintained and upgraded automatically, without user intervention.

As cloud based vendors begin to recognize the opportunity in the business analytic market, more of them will attempt to adopt similar mass customization models. A new variant on the cloud model is emerging. This variant embraces mass customization by making it possible for end users to rent applications in a few minutes on the Internet. In this model, applications are also provisioned and administered by the end user, and typically require no minimum time commitment (i.e., they are cancelable at will). These applications are excellent for businesses in that they require little learning and customization, no IT staff, and are eminently affordable. Easy to use analytic applications are ideal for many, but a long term relationship based on syndication would also be appealing to both customers and vendors.



Problems with the Traditional Cloud Based Analytic Model

We are in the middle of a series on the syndication of analytic applications in a cloud environment. As a reminder, we have defined syndication as the ability for a client to quickly and easily provision a cloud-based analytic environment for their own users, grant user access, control the data the user can see, and even enforce the analytics which the user will be able to produce.

Unfortunately, many cloud analytic vendors still maintain expensive elements of the software sales model, including face-to-face selling processes, dedicated account representatives, dedicated servers and manual application provisioning. These vendors are often engaging in something of a “shell game,” trading many of the costs involved in customer premise installations for essentially the same costs in an outsourced cloud environment. In some cases, this model can result in meaningful cost savings, but often the real value for customers of this model is the less tangible (but still meaningful) benefit of faster implementation and diminished need for internal IT complexity.

However, this cloud based analytic model does not truly take advantage of the economies of scale for software application rental which are made possible by the concept of syndication. Because a cloud based analytic implementation in the traditional model requires so much human intervention and customization, the price is prohibitive for all but a very few businesses. For this reason, cloud software implementations, especially those geared toward the analytic user, have traditionally focused on packages used in large enterprises.

These analytic applications fall down when it comes to self-service at the client level. For example, they do not support a client’s ability to provision instances for end users, to set access controls that dictate the level at which a user is authorized, or to control which analytics are used to support the user environment. It is little wonder, then, that clients’ experience with customizing analytic applications in the cloud environment is frustrating. This process should be easy, but little seems to have changed since the days of premise-based installations of analytic applications.



A Brief History of Rentable Applications and the Emergence of the Analytic Syndication Model

The idea of renting applications is not new. In the 1970s and early 1980s, the concept of renting software from a service provider was called “time-sharing”. While “time-sharing” made computing more affordable, it was still only within reach of large enterprises. In the late 1980s and early 1990s, the same idea was touted as “network computing,” and the falling prices of computing equipment made it possible for more and more companies.

In the early 2000s the concept was called “application service provision” (ASP), and the emergence of the Internet made the idea even more popular. In the second decade of the 2000s the same idea was called “SaaS,” and as more and more applications became available, more and more businesses shifted to using them. Still, many businesses were reluctant to take advantage of this model.

The term cloud computing was coined to describe vendors who provide these services. Businesses that were previously reluctant to access applications over the Internet became more comfortable doing so. The most security-conscious businesses were able to address their concerns by establishing private clouds.

Now, businesses that previously could not afford powerful software applications find that the rental model makes some of these applications accessible to them. Furthermore, many companies have been able to save money by relying on “cloud vendors” to manage and maintain applications previously managed by in-house IT staff.

The increasing sophistication of users of cloud based applications will invariably lead to increasing demands for features and functions. This mimics the premise-based application software model: the more companies become familiar with applications, the more customization they desire. Many cloud vendors today are already experiencing this phenomenon. Agylytyx predicts the logical conclusion of this trend is a shift of customization capability to the end user – in fact we believe this client “on demand” ability to customize cloud based applications will be the next major trend in application rental.

This trend will significantly impact analytic vendors. Not everyone in a company should have access to all data. Provisioning cloud-based analytic environments, granting user access, controlling the data, and even enforcing the analytics which will be produced will allow companies to manage this for multiple users in the future. This future is here today – the Agylytyx Generator™ supports these capabilities now.



The Nature of the Cloud Based Analytic Market

Worldwide, there are hundreds of thousands of analytic consuming businesses that fall into almost every imaginable industry segment and business size. In most companies, the use of analytics occurs in multiple business disciplines. For a few companies, analyzing data and reacting immediately is a mission-critical real-time exercise – some ecommerce and IoT implementations, for example.

The factors that make a segment attractive in this market are typically not unique to the segment itself; rather, they are shared by enough businesses adopting cloud analytics to make many segments attractive. These characteristics, which occur in greater proportion in certain segments, amount to a high propensity to adopt. They include specific cloud-related behaviors such as high bandwidth utilization, repeated cloud purchasing, and multiple users of online applications.

Among all segments, ease of use is one of the most important and common characteristics of online analytic consumption. Analytic producing business customers who use cloud based applications most aggressively are also the most comfortable establishing multiple user accounts for accessing analytics online.

However, these same businesses often do not have an extensive IT infrastructure and gravitate toward simplicity and ease of use in the analytic environment. For that reason, a platform that makes it easy to deploy analytic instances and allows automated access will make it affordable for content producers to reach the mass market.



Syndicating Cloud Based Analytics – an Introduction

We are starting a new series we are particularly excited about because it will give us a glimpse into the future. We explain what cloud based analytic syndication is, and why it will be so important. To help us predict future trends we actually start with a short introduction into the history of rentable applications. This first post provides a brief overview of the series.

Although very few businesses consume syndicated analytic content from the Internet today, the first to do so are the same businesses that first became comfortable with SaaS applications years ago. The businesses that eventually followed their lead into SaaS applications are the same ones that will syndicate analytics in the future. For this reason, analysts estimate that the market for syndicated analytic content will swell from a relatively modest base today to a multi-billion-dollar industry within a few years.

A few companies have begun to establish themselves as pioneers in this marketplace. Many of them embrace different business models and different components of the value chain for analytic production and delivery. This article series argues that the most viable long-term approach to this marketplace is the position of syndication facilitator – defined as the provider of a platform where multiple companies can provide their users with private access to a select portion of their syndicated content.

This article series also illustrates how Agylytyx Networks has embraced this role by developing the core competencies necessary to establish itself as a viable long-term syndicator of analytic content serving the business marketplace. The article series concludes by detailing the infrastructure and business processes in which Agylytyx has invested in order to establish these core competencies.



Using Machine Learning to Optimize IoT Networks – Overview

It is hard to ignore the growth of the Internet of Things (IoT) market in recent years and its outlook. Forbes predicts that the IoT market will grow to $457 billion by 2020. Leading industries like manufacturing, logistics and transportation have been the major contributors to the investment. The use of IoT networks is going mainstream: Gartner predicts that more than 65 percent of enterprises will adopt IoT products by 2020.

One of the chief benefits of IoT networks is the ability to capture data. As a result, IoT big data collection and processing continues to explode along with the growth in IoT adoption. Millions of devices feeding real-time series data into analytical systems give enterprises the capability to make data-driven decisions. A typical IoT solution pipeline consists of five distinct stages.

The third stage, “Transformation and analytics,” is the heart of this activity chain and encapsulates the real business value in IoT networks. It is the stage, for example, in which enterprises inspect data and verify that it is decision-ready. In this way enterprises can ensure that data collected from IoT networks can directly influence the optimization of business flows. How the data is processed (including hygiene and parsing), how quickly information can be retrieved from the data, and whether access controls are sufficient are the key questions in this stage.

This is where machine learning and artificial intelligence play a special role. The ability of a system to monitor IoT networks (including the data they generate) and make cognitive decisions based on this historical data greatly increases the value of any solution.

Machine learning has become a top concept in the technology world in recent years. The term has been evolving since the late 1950s. However, due to limited accessible data and computational power, machine learning did not realize its full potential until the era of big data in the 2010s. With the power of cloud computing and advanced algorithms, machine learning can now be applied to almost all types of data analytics, including IoT. Technologies like Azure Machine Learning™ and Google Cloud Machine Learning™ employ supervised learning techniques to help make business decisions based on classification, regression, and anomaly detection from within IoT data.

One of the prerequisites of machine learning is big data collection which can be easily achieved in IoT systems. The following are some common cases where machine learning works together with IoT to optimize businesses:

Anomaly monitoring — Machine learning can be used to detect anomalies in time series data. Spikes and dips, as well as positive and negative trends, can be detected by a machine learning algorithm monitoring the live stream of device feeds. The value proposition of measurable improvements in things like inventory cannot be overstated.
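To make the spike-and-dip case concrete, here is a minimal rolling z-score detector, a deliberately simple baseline rather than any vendor’s actual algorithm. The sensor feed and the window/threshold values are invented for illustration:

```python
from statistics import mean, stdev

def detect_anomalies(series, window=10, threshold=3.0):
    """Flag points more than `threshold` standard deviations away from the
    rolling mean of the previous `window` readings."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # Guard against a flat history (sigma == 0) before dividing.
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical device feed: steady readings around 20 with one spike.
feed = [20.1, 19.8, 20.0, 20.3, 19.9, 20.2, 20.0, 19.7, 20.1, 20.0,
        20.2, 19.9, 45.0, 20.1, 20.0]
print(detect_anomalies(feed))  # → [12], the index of the spike
```

A production system would replace this with a trained model, but the shape of the task is the same: compare each new reading against what the recent history predicts.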

Predictive maintenance — Predictive maintenance can directly impact the costs of operating an enterprise, which makes it one of the most popular machine learning solutions. The ability of machine learning algorithms to foresee the possibility of a device failing, extend the life of an expensive piece of equipment, and zero in on the root causes of device failure enables a business to optimize operational cost by significantly reducing maintenance time.
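A sketch of the underlying idea, under strong simplifying assumptions (a single degradation signal, a linear trend, and invented vibration readings and failure threshold): fit a trend line to the signal and extrapolate when it crosses the failure level.

```python
def months_to_failure(readings, failure_level):
    """Fit a least-squares trend line to a degradation signal (e.g. monthly
    vibration readings) and extrapolate when it crosses failure_level.
    Returns the predicted month, or None if the signal is not degrading."""
    n = len(readings)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    if slope <= 0:
        return None  # no upward trend, so no predicted failure
    return (failure_level - intercept) / slope

# Hypothetical readings trending upward month over month.
readings = [1.0, 1.2, 1.4, 1.6, 1.8]
print(months_to_failure(readings, failure_level=3.0))  # ≈ 10 months
```

Real predictive-maintenance pipelines use richer models over many sensors, but the payoff is the same: scheduling service before the extrapolated failure point rather than after a breakdown.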

Vehicle telemetry — The capability of machine learning solutions to ingest millions of events from vehicles to improve their safety, reliability, and driving experience makes this a desirable technology for the transportation and logistics industries. These are a few examples of how IoT networks are becoming mainstream. As the examples above demonstrate, optimizing business processes using IoT networks continues to drive growth of the market.

Agylytyx can help you achieve your IoT network goals. Read our IoT use cases or contact us today for a free assessment of IoT network deployment at your company.



Portfolio Management – Optimization Using Multiple Dimensions

In the previous post we discussed the famous “knapsack problem” in the context of portfolio management and introduced Agylytyx Optimizer’s ability to solve such a problem. In this post, we will reach forward to a more complex scenario – a knapsack problem with more than one constraint. We will refer to this problem as the “multi-dimensional knapsack problem.”

In the classic knapsack problem, our goal was to maximize the value of items you are packing for an emergency evacuation of a home, where the only constraint was weight – the total weight of packed items cannot exceed the capacity of the knapsack. Similarly, we noted that the total cost of a portfolio usually cannot exceed a certain budget level (colloquially referred to as “affordability”). However, in real-world business, affordability is not the only factor in project selection. Corporations often consider other factors when selecting projects. For example, projects may have interdependencies involving resources such as people and technologies. When managing portfolios, these other constraints must be considered as well.

To illustrate this point, assume portfolio A is, under the simple knapsack view, the most efficient in terms of cost and benefit. Further assume that this portfolio contains a couple of projects which each require 8 months of full dedication from the same team. In this case, there may be enough money for both projects, but the requirement to use the same team means it is impossible to implement both projects simultaneously. In other words, you can’t accomplish the work of Portfolio A within one year.

To avoid this situation, when selecting projects, one also has to take dependencies like team allocation into consideration along with budget. The problem now contains a new resource constraint in addition to budget, and this simple knapsack problem has become a multi-dimensional knapsack problem.
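The two-constraint version can be sketched in a few lines of Python. The project names, costs, team-months and benefits below are invented for illustration, and brute force over subsets (fine for the small portfolios of an annual planning exercise) stands in for the Optimizer’s actual algorithm:

```python
from itertools import combinations

def best_portfolio(projects, budget, team_months):
    """Pick the benefit-maximizing subset of projects subject to two
    constraints: total cost <= budget and total team time <= team_months.
    projects: list of (name, cost, months, benefit) tuples."""
    best = (0, ())
    for r in range(len(projects) + 1):
        for combo in combinations(projects, r):
            cost = sum(p[1] for p in combo)
            months = sum(p[2] for p in combo)
            if cost <= budget and months <= team_months:
                benefit = sum(p[3] for p in combo)
                if benefit > best[0]:
                    best = (benefit, tuple(p[0] for p in combo))
    return best

# Hypothetical projects: (name, cost $k, team-months, benefit $k).
# A and B each need 8 months of the same team, echoing the scenario above.
projects = [("A", 300, 8, 900), ("B", 250, 8, 800),
            ("C", 200, 4, 500), ("D", 150, 3, 350)]
benefit, chosen = best_portfolio(projects, budget=600, team_months=12)
```

With these numbers, A and B together fit the budget (550 ≤ 600) but need 16 team-months, so the optimizer instead pairs A with C for a benefit of 1,400 within both limits.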

To solve an optimization with more than one constraint, it is possible to simply add a constraint variable in Excel Solver, but as we saw last week, Solver requires a lot of repetitive work that makes scenario planning untenable. A better solution is to use the Agylytyx Optimizer to generate portfolios which strictly respect multiple constraints, like different budget and staffing levels, while still generating efficient frontiers and saving the results.

Currently, the Agylytyx Optimizer handles only one constraint in its alpha test. In fact, several users have already commented on the need to build multi-constraint capability into the product. "Real portfolio planning should take into consideration a lot of additional constraints reflecting the real situation of the portfolio," remarked one of our alpha testers. As a consequence, we have prioritized this requirement on the improvement list for our beta version.

You can check our progress on the Optimizer. It is free and easy to use. Please go to The Agylytyx Optimizer to check it out. The user-friendly instructions on the home page will guide you through the process. If you have any comments, feel free to send them to us at



Optimizing a Portfolio Using the Agylytyx Optimizer™

In a previous post series, we talked about corporate portfolio management under different scenarios. In this post we will introduce the Agylytyx Optimizer, a tool we developed especially for an environment in which multiple projects are managed. The Optimizer helps users with project selection when they have limited resources. For example, when budgets are limited, management teams are required to spend them in the most efficient way. A classic analogy for this scenario is what economists call the “Knapsack Problem” – in other words, how to get the most “bang for the buck.”

Let’s take a deeper look at the knapsack problem. As described in the above picture, assume you are preparing for an emergency evacuation of your home; you can only take one knapsack with you, and you have to choose the most valuable items to bring. Your knapsack has a limited capacity of 400 oz, and you are trying to pack all your most valuable items in it. It is impossible to pack all of them, since their total weight would be 613 oz. Given the weight of each item and its value, you must quickly decide whether each item should be taken or not. The natural human tendency is to start with the most valuable items and fill the knapsack until you run out of room. Because more valuable items may also be heavier, however, this approach is not necessarily the best one. This is a classic illustration of the knapsack problem.
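The exact optimum can be computed with textbook dynamic programming. The sketch below uses invented weights and values (not the figures from the original illustration) to show the greedy “most valuable first” instinct losing to the true optimum:

```python
def knapsack(items, capacity):
    """0/1 knapsack via dynamic programming.
    items: list of (name, weight, value); capacity: max total weight.
    Returns (best_value, chosen_names)."""
    # best[w] = (value, chosen_names) achievable within weight w
    best = [(0, [])] * (capacity + 1)
    for name, wt, val in items:
        new = best[:]
        for w in range(wt, capacity + 1):
            cand_val = best[w - wt][0] + val
            if cand_val > new[w][0]:
                new[w] = (cand_val, best[w - wt][1] + [name])
        best = new
    return best[capacity]

# Hypothetical household items: (name, weight in oz, value in $).
items = [("laptop", 48, 1200), ("jewelry", 8, 3000),
         ("camera", 32, 600), ("paintings", 240, 2500),
         ("documents", 16, 2000), ("heirloom clock", 360, 2600)]
value, chosen = knapsack(items, 400)
print(value, chosen)
```

Greedy packing grabs the jewelry, then the heavy heirloom clock, and tops out at $7,600 of value; the dynamic program instead leaves the clock behind and fits the other five items for $9,300.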

This problem can also be applied to a company’s annual budgeting exercise. Consider that projects are the items to be selected, that their costs and benefits correspond to the weights and values of the items, and that your budget level is the maximum weight (capacity) of the knapsack. In addition, since projects, like items of value, usually present binary choices (fund it or not, pack it or not), the classic “knapsack problem” applies.

The knapsack problem is easy to solve when the knapsack capacity is fixed. In this case, the Microsoft Excel™ Solver function can handle it: setting up goals and constraints is all that is required to run Solver and generate results. However, if your budget level is dynamic and you want to see which portfolios are best (with the most benefit under different funding levels), no Microsoft Excel function, including Solver, will produce an optimal solution, primarily because Solver only calculates one set of solutions at a time. In cases where the budget level is dynamic and the user desires sensitivity and scenario analysis, using Microsoft Excel becomes problematic: the user must run Solver repeatedly for different budget levels, carefully documenting the result for each funding level.

Enter the Agylytyx Optimizer™. When users have different budget levels, they simply input them along with the cost and benefit of each project. The Optimizer returns the portfolio (combination of projects) with the highest benefit at each budget level. This means that with the click of a button, users get the results for all budget levels together in a single file. What’s more, the Optimizer renders a chart showing each portfolio’s cost and benefit. The dots trace a curve known as the “efficient frontier,” on which each point is your best project bundle at that budget level.

The Optimizer is easy to use. It’s a simple three-step process: 1) download the template Excel file we provide, 2) fill in the template with your projects’ costs and benefits, and 3) upload and optimize it. Imagine you have 100 budget levels: you definitely do not want to spend a whole day solving them in Excel one by one. The Optimizer generates results for hundreds of budget levels in seconds.

We wrapped the Optimizer in a website application where users can create their own accounts and manage their uploaded files and results.

The Optimizer is now in alpha test. Please go to The Agylytyx Optimizer to check it out. The user-friendly instructions on the home page will guide you through the process. If you have any comments, feel free to send them to us at



Tracking Plan to Actual – Creating a Culture of Tracking

We have been using a construct we innovated called the Finance Led Process Lifecycle to look at the ways it can help create a process for Tracking Plan to Actual results. In last week’s blog post, we looked at the first quadrant of the two-by-two matrix in the Finance Led Process Lifecycle, formed when finance led processes are just being designed within the team. We call this the Conception quadrant, and we looked last week at how important it is to establish a baseline (plan) during this quadrant.

Last week, we looked at how it isn’t always easy to establish a baseline for all parts of a complex portfolio. Often, budgets and benefits (sales, bookings, revenue, gross margin, contribution margin, etc.) are established only for certain departments and often ignore “overlays” like channels of distribution or regions. We also looked at the reasons why it is desirable to carefully “decompose” the plan by allocating it out to these different organizations.

This week, we are going to look at what happens when the Finance Led Process begins to be socialized externally – what we call quadrant two in the Lifecycle, the Collaboration phase. We will look at how the process for Tracking Plan to Actual results gets socialized publicly in the Collaboration phase and what it means to create (or deepen) a culture of Tracking Plan to Actual results in a company.

Finance teams are often surprised at what happens when socializing this plan for the first time with all the business constituents in the company. As a rule, business leaders may not initially react well to the notion of accountability, simply because they may not have been measured against “hard targets” before. Some business leaders may react well to the notion of being measured, but they may change their minds as the fiscal year progresses and things aren’t looking so good for their measurements. We have seen business leaders reverse course on the value of Tracking Plan to Actual results under these circumstances.

As we explained in our previous post, the more thoroughly an allocated plan is thought through, especially the rationale behind the allocation, the more readily the allocation will be accepted. Especially when the plan is first socialized, there is a lot of credibility and goodwill to be gained by a slight redesign of the baseline. When business leaders make a strong argument which invalidates the rationale behind a baseline, the willingness to modify the baseline will help create a culture of tracking by forging consensus and cooperation.

As a plan undergoes a few minor revisions and proceeds through quadrant two to quadrant three, the baseline should be established and accepted. The goal of socializing the plan is not just to improve it, but to generate support for it.

It should go without saying that the purpose of creating such an exacting plan is to measure and track against it, so business leaders are accountable for their actions. Propagating the tracking to gain consensus in the execution phase can only be done if the collaboration phase is done well. Obtaining buy-in on the baseline is the way to create a culture of accountability in any organization.



Tracking Plan to Actual – Establishing a Baseline

Last week we started a series on Tracking of Plan to Actual results. We are using a construct Agylytyx invented called the “Finance Led Process Lifecycle” for analyzing the process of Tracking Plan to Actual results. We realized that many companies are new to this process, and that even those that are not would likely benefit from a methodical approach to thinking it through. At a bare minimum, thinking about Tracking of Plan to Actual results as a process is likely to create ideas about how that process can be improved.

We quickly reviewed the Finance Led Process Lifecycle last week. We have applied that particular construct to Long Range Planning and analytics previously. In our last post, we talked through how the Finance Led Process Lifecycle would be applied to the process of Tracking Plan to Actual results.

In this post we will look specifically at the application of the first quadrant in the Finance Led Process Lifecycle to the Tracking of Plan to Actual results. The first quadrant in the lifecycle of any finance led process is what we call the “conception” stage. It is where any idea is formulated. At this point, the corporate involvement is low because only the finance team is involved.

When applied to the Tracking of Plan to Actual results, this idea phase becomes all about establishing a baseline. In principle this seems easy. In reality, this process can be quite difficult. As a general rule, the more complex the portfolio, the more difficult the task. This is because the actual commitments are generally made at a department level within functions. The more offerings, theaters, overlay organizations, channels of distribution, etc., the more difficult this task can become.

This difficulty level only increases when a company is committed to doing the process of Tracking of Plan to Actual correctly. Allocating the budget requests and revenue projections to the various pieces of the portfolio that were not directly responsible for those requests and projections can be at best tedious and at worst very time consuming and inaccurate. We have seen decompositions of these budget requests and projections resemble the old game of Tetris – essentially trying to play with all the moving parts until they finally “come out even,” meaning they all tie out.
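The “tie out” idea above can be sketched in a few lines. This is a minimal illustration, not any company’s actual method: the department names, weights, and dollar figures are all hypothetical, and real decompositions involve many more dimensions.

```python
# Hypothetical sketch: decompose a department-level budget across portfolio
# elements by assumed allocation weights, then verify the pieces "tie out".

def decompose(budget: float, weights: dict) -> dict:
    """Allocate a single budget across portfolio elements in proportion to weight."""
    total_weight = sum(weights.values())
    return {element: budget * w / total_weight for element, w in weights.items()}

def ties_out(budget: float, allocation: dict, tolerance: float = 0.01) -> bool:
    """The allocated pieces must sum back to the original budget."""
    return abs(sum(allocation.values()) - budget) <= tolerance

# Example: a $1.2M engineering budget spread across three offerings.
engineering = decompose(1_200_000, {"Offering A": 0.5, "Offering B": 0.3, "Offering C": 0.2})
assert ties_out(1_200_000, engineering)
```

In practice the “Tetris” comes from doing this across many overlapping dimensions at once, but the principle is the same: every decomposition must sum back to the committed total.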

This may not be easy, but it is a one-time process which is worth doing. To foster a sense of accountability across the organization, it is not good for some business leaders to be “on the hook” for budget and revenue quotas while others are not. All business leaders should have a stake in achieving budget and revenue goals collectively, even if they are “overlay” organizations. Tracking of Plan to Actual results at each level of the portfolio requires that a good baseline (“plan”) be established and allocated out to all elements of the company.

In the first phase of the Finance Led Process Lifecycle, it is important for the finance group to establish a solid baseline. In our next post, we are going to look at exposing that plan to the rest of the company. In the first phase, it is vital to establish the baseline for the whole company, and to document the rationale for the way the plan has been decomposed. This will minimize the risk of needing to revise the plan in the second phase. Still, as we will cover in that post, some slight revision will be necessary. The more a finance group can elucidate the rationale for decomposing the plan the way it has, the easier this process will become.

If you would like to apply the Finance Led Process Lifecycle in your organization or work through your Plan to Actual Tracking, Agylytyx can help. Contact us today for an assessment.



Tracking Plan to Actual – An Introduction

We are planning to start a new multipart series about how to measure and track. Many things may need to be measured and tracked in a company: budget requests, sales quotas, timelines, headcount, programs, projects, product offerings, and service offerings, just to name a few. There can also be a lot of ways to systemically deliver results: reports, scorecards, dashboards just to name a few. We are going to assume, for this series, a very common tracking function usually led by a finance team – the tracking of actual results against a plan.

This whole process sounds straightforward at first glance. You simply “put a plan of record into a vault” and then “take it out” at regular intervals to obtain a comparison. At a high level, it really is that easy. However, as anyone who has been involved in the process of reporting actual results against a plan will tell you, it is never that simple. There are usually more moving parts to a plan than are originally memorialized in the plan. The explanation of the difference between the two is often elusive for that reason.
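The “vault” idea at its simplest can be sketched as follows. This is an illustrative toy, with made-up quarterly figures, not a real planning system: freeze the plan of record, then compare actuals against it each period.

```python
# Minimal sketch of the "vault": a frozen plan of record compared against
# actuals at regular intervals. All figures are illustrative.

plan_of_record = {"Q1": 100, "Q2": 110, "Q3": 120, "Q4": 130}  # frozen at plan time

def variance_report(plan: dict, actuals: dict) -> dict:
    """Per-period variance (actual minus plan) for the periods reported so far."""
    return {period: actuals[period] - plan[period] for period in actuals}

actuals = {"Q1": 95, "Q2": 118}
print(variance_report(plan_of_record, actuals))  # {'Q1': -5, 'Q2': 8}
```

The hard part, as the paragraph above notes, is not this arithmetic – it is that the real plan has many more moving parts than a single frozen number per period.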

We are going to use the Finance Led Process Lifecycle to think about how to improve our plan to actual tracking. The Lifecycle of any process led by finance teams goes through four distinct phases, and the Tracking Plan to Actual process is no different. We will devote each of the next several posts to one of these phases, then conclude with a look back at the entire process.

By way of review, the Finance-Led Process Lifecycle divides any process which is led by finance teams into four distinct quadrants. These quadrants are formed by two axes, one axis illustrates the degree of completion of the process, and the other axis illustrates the degree of involvement by other groups within the company.

The diagram shown here illustrates those four phases. The diagram also shows the names of each of the four quadrants. Finally, the model superimposes an arrow which shows the path that each Finance-Led Process (in this case, the Tracking of Plan to Actual) follows.

In our next post, we will talk about the first phase or quadrant, formed by the intersection of a relatively new process and one which is also not very complete. In that post, about the stage labelled “Conception,” we will talk about how, despite this part of the process seeming “easy,” it is in fact very important to be deliberate here in order to make future parts of the process much easier. The point of this phase is to establish a tracking mechanism which will be proposed to a wider audience.

As the process of Tracking Plan to Actual becomes well-defined within the finance group and agreed upon, it is time to move into the second or “Collaboration” phase of the process. The process of Tracking Plan to Actual is defined enough to be shared, so it is high on the corporate involvement axis, but it can and should be refined in this phase to help other groups “become vested” in the process. This is an important step, especially for companies that do not yet have a strong culture of accountability. In this case, socializing the process of Tracking Plan to Actual is also about “propagating the mindset.”

Next, we will describe executing the process of Tracking Plan to Actual. If the first two phases have been done correctly, this third, “Consensus” phase of high corporate-wide involvement means driving consensus to do root cause analysis, or “find the explanations.” We have seen many companies call this bridging the gap or explaining the difference. This explanation is critical as we move into the final phase.

In the “Coordination” phase (which in this case we may as well call the “Communication” phase) the finance team will be responsible for reporting results in terms of Plan to Actual to executives. Most impressive are the presentations which include a “bridge” explaining why differences exist between the Plan and the Actuals. These explanations can only come if the mechanism has been set, a culture of measurement has been established, and true partnerships with the business have been created.
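The “bridge” described above has a simple underlying test: the named drivers should fully explain the gap between Plan and Actual. A minimal sketch, with hypothetical totals and driver names:

```python
# Illustrative sketch: a bridge is credible only if its named drivers
# sum to the entire Plan-to-Actual gap. All figures are made up.

def bridge(plan_total: float, actual_total: float, explanations: dict):
    """Return (gap, explained amount, whether the drivers fully explain the gap)."""
    gap = actual_total - plan_total
    explained = sum(explanations.values())
    return gap, explained, abs(gap - explained) < 1e-9

gap, explained, complete = bridge(
    plan_total=1000,
    actual_total=940,
    explanations={"Deal slipped to next quarter": -80, "FX tailwind": +20},
)
# gap = -60, explained = -60, complete = True
```

A bridge whose drivers do not tie out to the gap is a sign that the root cause analysis in the Consensus phase is unfinished.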

We look forward to looking at each of these phases for Tracking Plan to Actual in detail. We hope you will come on the journey with us and see how your team can become experts in Tracking Plan to Actual too.



Portfolio Management Summarized

This post will conclude our series on Portfolio Management. The topics we have covered in great detail include the Portfolio Management of Risk, the Portfolio Management of Non-Revenue Generating Initiatives, the Portfolio Management of Business Units, and the Portfolio Management of Distribution Channels. Last week we looked at the final specific piece of the puzzle when we looked at the Portfolio Management of Corporate Goals. We noted in that post that these questions could become very complex, very quickly.

In fact, that is true of any of these topics, or combinations of them. When dimensions like products and regions are factored in, there are lots of possible combinations. For example, a business team may seek to answer the question: which products are the riskiest when distributed by Business Unit X through the reseller channel in Europe? A Portfolio Management team might want to know where the exposure lies, and in turn will depend on the data to reveal whether there is a combination that results in the greatest spike in “risk.”

Portfolio management is not about providing the answer to a specific and pointed question. True portfolio management is about knowing which question to ask and how to explore data. When an executive (say a CFO) seeks to know the answer to a general question like “why are my margins eroding,” a typical data exploration approach (be it a pivot table or an engine like Tableau or Qlikview) is like looking for a needle in the proverbial haystack – changing multiple portfolio management inputs like the ones above and then hoping to visually stumble upon the answer.

Agylytyx consulting services and products specialize in simplifying this complexity. The Agylytyx Generator is built with Portfolio Management in mind. We know, for example, that the answer to the question above is usually multiple factors and that each factor needs to be quantified. Further, revenue trends and margin trends deserve to be considered. To be able to visually inspect hundreds of charts from all the different combinations within seconds dramatically improves Agylytyx’s ability to offer the answers immediately.
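The combinatorial scale behind “hundreds of charts” is easy to see. The sketch below is purely illustrative (the dimension names and values are hypothetical, and this is not how any particular product works internally): every combination of portfolio dimensions is a candidate view to inspect.

```python
from itertools import product

# Hypothetical portfolio dimensions; in practice these come from the data.
dimensions = {
    "business_unit": ["BU X", "BU Y"],
    "channel": ["Direct", "Reseller"],
    "region": ["Americas", "Europe", "APAC"],
}

# Every combination of dimension values is one candidate view (chart).
views = [dict(zip(dimensions, combo)) for combo in product(*dimensions.values())]
print(len(views))  # 2 * 2 * 3 = 12 candidate views
```

With even two values per dimension, adding dimensions multiplies the view count, which is why manually stepping through them one filter at a time scales so poorly.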

The ability to assess the impact of various funding decisions on Portfolio Management is a specialty of the Agylytyx Generator product. Contact us today for more information on how Agylytyx can help with Portfolio Management.



Portfolio Management of Corporate Goals

This will be the next to last post of our series on Portfolio Management. We have been looking at different approaches to Portfolio Management to various aspects common to large companies. Last week we looked at the complicated question of the Portfolio Management of Distribution Channels and noted that this question could become very complex very quickly.

In many ways, the Portfolio Management of Corporate Goals is even more complicated. The approaches to the Portfolio Management of Corporate Goals are even more numerous. The good news is that, unlike the term “channels,” there is very little confusion or ambiguity about the concept. Although different terms like strategies, objectives, aspirations, etc. may be used in different companies, the overall concept remains the same.

The concept of the Portfolio Management of Corporate Goals poses a set of challenges which are unique. No other aspect of the discipline of Portfolio Management is subject to the same difficulties. The most common thing we see among clients is that this particular approach is not even attempted, is given some cursory “lip service,” or is subject to a type of “revisionist history” in which the operational budgets are explained in terms of strategic alignment.

The Portfolio Management of Corporate Goals is central to the success of the alignment of budgets with corporate strategies. This is not an easy task, but it is one worth doing. As we have covered in two previous series, the notion of aligning strategy and execution is central to the long-term success of a company. We even cited long term research on this topic from Mackenzie Research which noted that companies which are able to align their operational budgets and strategies become 40% more valuable in terms of return to shareholders over time than their competitors.

Among the few firms that attempt the Portfolio Management of Corporate Goals, we have noticed a common tendency not to get it right in the first year before asking for help. Most companies understand that the first step in this kind of endeavor is to create alignment between budget requests and the stated corporate goals.

The best people to create that alignment are usually the business people making the budget requests. Unfortunately, they are often the people with the least idea of how to identify that alignment. In the most egregious cases, we have observed business leaders who are used to “gaming behavior” in their budget requests try to “force” the alignment of their requests to the stated corporate goal that they deem most compelling. In the less egregious cases, we have observed business leaders taking wild guesses about how their requests line up with corporate goals. In these latter cases, business leaders are often well intentioned, but are not educated enough about corporate strategy to identify the alignment.

The mission to align budget requests with corporate goals is a prerequisite to the Portfolio Management of Corporate Goals. So how does one obtain reliable information aligning budget requests to corporate goals? Fortunately, there are some best practices which can help.

First, create a climate conducive to the alignment of budget requests to corporate goals. This is easier said than done, but explicit directions and transparency are two keys to achieving it. One critical way we help companies achieve this goal is by scheduling two meetings during the budget request process. All persons involved in the budget request process should be invited to these meetings. Both meetings must be carefully orchestrated for maximum effect.

The first of these meetings is called a Corporate Goal education session. This meeting involves the group coordinating budget requests (usually finance) explaining the importance of aligning requests with corporate goals, how that will happen, and providing some specific examples. This meeting usually also involves the strategy team explaining what the corporate goals are and providing detail around them.

The second of these meetings is called a Strategy Sharing session. In this meeting, budget requesters explicitly review the way their various requests align with corporate goals. Peer reviews of these requests and their goal assignments always result in an improved view.

A second best practice is to embrace and encourage partial alignments, but always require that they total 100%. We have seen too many companies lose their way on both sides of the spectrum. When companies are too rigid, they insist on aligning all budget requests to one goal. In reality, that is not always realistic; some proposed and ongoing projects support more than one corporate goal.

On the other hand, we have also seen companies be “too flexible” with their approach. In some extreme cases, we have seen a budget request aligned 100% with several corporate goals. This approach will make it impossible to effectively achieve the objective of the Portfolio Management of Corporate Goals.

Sometimes, one of the hardest things to do when requesting a budget is to decide which corporate goals that budget request actually supports, and in what quantity. Still, the clarity that results from this thought process is important to foster. Indeed, a company’s ability to link its strategy with its execution hangs in the balance. If this means creating a mini-business plan for the budget request, so be it. It is important to enforce the alignment of the budget request with explicit corporate goals, and to make sure the requested budget amounts can be mapped to corporate goals in an allocation that equals 100% of the amount requested, and no more.
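The 100% rule described above is mechanical enough to check automatically. A minimal sketch, with hypothetical goal names and percentages (not any company’s real goals):

```python
# Illustrative validation: each budget request's alignment to corporate
# goals must total exactly 100% - no less (unaligned) and no more
# (the same dollars counted against several goals at once).

def valid_alignment(alignment: dict) -> bool:
    """True when the goal percentages for one request sum to exactly 100."""
    return abs(sum(alignment.values()) - 100.0) < 1e-9

request = {"Grow services revenue": 60.0, "Expand into Europe": 40.0}
over_allocated = {"Grow services revenue": 100.0, "Expand into Europe": 100.0}

assert valid_alignment(request)        # a legitimate partial alignment
assert not valid_alignment(over_allocated)  # the "too flexible" failure mode
```

Enforcing this check at submission time catches both the “too rigid” and “too flexible” failure modes before the analysis phase begins.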

A third best practice is to assist business leaders in the alignment of their budget requests with corporate goals. In addition to the meetings described above, we commonly see the group which administers the budget process (usually finance) providing a dedicated resource with expertise on the alignment process to the business leaders requesting the budgets. These resources provide whatever support is necessary, from advising the business leaders on the process, to preparing business cases and models, to transmitting the various budget requests.

The job of the Portfolio Management of Corporate Goals really starts when all the information has been received by the group administering the budget process. At this point the analysis of the various budget requests and their impact on corporate goals must be assessed. This is never an easy process, but we have seen it go well. Fortunately, there are some best practices we can summarize.

First, it is important not to be too obsessed with getting it right, especially the first time around. While it is important to achieve valid results, if the information has been gathered relatively accurately, there is every reason to believe that the output will be valid for decision-making. We have seen companies let the “perfect become the enemy of the good,” effectively going into information-overload mode. We have even seen companies where executives express a lack of confidence in the results because they have heard that the process was uncertain. It is important to communicate expectations appropriately to all at the outset – that the objective of this process is to achieve results which are directionally accurate for decision-making purposes.

Second, it is important to understand the complexity involved. The budget requests will almost certainly involve sets of products, regions of the world, channels of distribution, etc. Understanding the budget request alignment to Corporate Goals will ultimately help decipher the way various products and services support Corporate Goals. It will help decide how the various regions play in the support of various Corporate Goals. It will help decide which channels of distribution affect various Corporate Goals. It will help understand how various Business Units align with Corporate Goals. All the various permutations will help analyze the impact of various funding scenarios on Corporate Goals.
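Once every request carries a valid alignment, the analysis itself is a roll-up: each request’s dollars are spread across goals by its alignment percentages. A minimal sketch with invented amounts and goal names:

```python
from collections import defaultdict

# Illustrative budget requests: (amount, {goal: percent alignment}).
# Names and figures are hypothetical.
requests = [
    (500_000, {"Goal A": 70, "Goal B": 30}),
    (200_000, {"Goal B": 100}),
]

# Roll each request's dollars up to the goals it supports.
funding_by_goal = defaultdict(float)
for amount, alignment in requests:
    for goal, pct in alignment.items():
        funding_by_goal[goal] += amount * pct / 100

print(dict(funding_by_goal))  # {'Goal A': 350000.0, 'Goal B': 350000.0}
```

The same roll-up can be cut by product, region, channel, or business unit, which is exactly where the permutations mentioned above come from.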

Agylytyx consulting services and products specialize in simplifying this complexity. The Agylytyx Generator is built with this complexity in mind. The ability to assess the impact of various funding decisions on Corporate Goals is a specialty of the Agylytyx Generator product. Contact us today for more information on how Agylytyx can help with the Portfolio Management of Corporate Goals.



Portfolio Management of Distribution Channels

We are in the middle of a series which highlights some of the portfolio management tactics commonly used in firms to align various components of their portfolio, particularly during the annual budgeting process. We have looked at the Portfolio Management of Risk, the Portfolio Management of Non-Revenue Generating Initiatives, and the Portfolio Management of Business Units. In this post, we will look at some common approaches to the Portfolio Management of Channels.

The Portfolio Management of Channels is tricky because the term means many things to many different companies. We have seen the term used synonymously with what we would call the “Supply Chain” – essentially inputs to a manufacturing or resale process. Many companies have both a Supply Chain and a distribution mechanism. In these cases, we have seen companies refer to the entire process as their “channels.”

While we have found it expedient to use the terminology of the company with whom we are engaged, we have also found it productive to explain the semantic shorthand we use in our writings. “Your mileage may vary,” meaning the term may not always be used in the same way at your company. When we say “Channels” we typically mean “Channels of Distribution” (as the title of this post implies). Many companies call this “routes to market.” Explicitly excluded from the scope of this post is what we call “Supply Chain” – the provision of inputs to manufacturing.

Many of our clients have considerable complexity in their channels of distribution. For example, one channel may supply many other channels, some of whom also purchase directly from our clients, who are then asked to mediate the resulting “channel conflict,” particularly when price and margin are involved. Because these ultimate “routes to market” may end up with dozens of permutations, it is even more important, then, to master the art of Portfolio Management when it comes to analyzing Channels of Distribution.

Different companies use different terms when it comes to describing their channels of distribution also. Some of the terms we have seen include: “resellers”, “value added resellers (VARs)”, “distributors”, “master distributors”, “wholesalers”, “retailers”, “intermediaries”, “brokers”, “pass-through distributors”, “authorized resellers”, “certified resellers”, “channel partners”, and many more. When it comes to the Portfolio Management of Distribution Channels, we have found that the naming conventions used do not matter much. What matters is the ability to elucidate how products and services flow through these various routes to market, and the subsequent impact on the financials.

Portfolio Management of Distribution Channels requires an understanding of the financial impact of the various “routes to market.” Specifically, understanding the sales volume through the various first level partnerships is NOT enough. Understanding how the products and services are delivered to customers through these partnerships, and in what volume, is crucial.

Understanding the various margin impacts is often even more crucial. We have run into many situations where a particular “partner” was being “protected” due to their volume of sales, even though their discount structure and deal desk appeals meant that our client was actually losing money (negative margin) on average. Effective Portfolio Management of Distribution Channels starts with a true assessment of the specific financial impact of the various routes to market.

This assessment can be problematic. Even where data is readily available, the assessments may vary wildly between products and even regions of the world. We have seen situations where a particular partner may not be profitable with one set of products in one part of the world, but the same partner is doing quite well with other products in another part of the world. For large companies, this is why understanding the financial impact of a partner ecosystem globally is vital to effective Portfolio Management of Distribution Channels.
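The partner-by-product-by-region view described above amounts to computing margin at that grain and flagging the losing cells. A minimal sketch with invented partners, products, and figures:

```python
from collections import defaultdict

# Illustrative transactions: (partner, product, region, revenue, cost).
# All names and amounts are hypothetical.
transactions = [
    ("Partner 1", "Product A", "EMEA", 100.0, 110.0),  # sold below cost
    ("Partner 1", "Product B", "APAC", 200.0, 140.0),
    ("Partner 2", "Product A", "EMEA", 150.0, 120.0),
]

# Margin at the partner/product/region grain.
margin = defaultdict(float)
for partner, prod, region, revenue, cost in transactions:
    margin[(partner, prod, region)] += revenue - cost

# Flag the cells where the client is actually losing money.
losing = [key for key, m in margin.items() if m < 0]
print(losing)  # [('Partner 1', 'Product A', 'EMEA')]
```

Note that Partner 1 looks profitable in aggregate; only the finer-grained cut reveals the unprofitable product/region cell, which is exactly the point of the paragraph above.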

Doing the analysis for the Portfolio Management of Distribution Channels natively is not easy. Even when the impacts in terms of products and regions are well understood, a single form of analysis like a “pivot table” is problematic. Even Power BI or an analytic engine like Qlikview or Tableau will not allow you to compare data discoveries. When it comes to the Portfolio Management of Distribution Channels, the ability to save and compare results across various regions and various product combinations is vital.

Only the Agylytyx Generator has this capability. When it comes to assessing financial impact of the impact of various routes to market in different regions and different product sets, there is no comparison with other approaches. For true Portfolio Management of Distribution Channels, only the Agylytyx Generator will do the trick.

To truly leverage our experience with Distribution Channels contact us today.



Portfolio Management of Business Units

In our last post, we covered the ways various companies conduct budgeting for non-revenue generating initiatives. In doing so, we picked up a topic that we had been covering several months ago – approaches to portfolio management. After introducing the topic, we covered the portfolio management of risk. We then interrupted the series to do a special series on linking finance and strategy. After picking up the portfolio management topic last week, we exhaustively covered the portfolio management of non-revenue generating initiatives. This week’s topic, the portfolio management of business units, is comparatively easy next to the last two. As such, we will endeavor to make this post shorter.

In this post, rather than focus on aspects of business unit management which most companies have mastered, we are going to focus on overlapping budget requests between business units and other elements of the business. We have found this to be the most vexing problem generally facing companies with business units and a degree of complexity in the portfolio.

In theory, portfolio management for business units should be a simple exercise. Business units should be discrete in their budget requests. They should have their own projects, their own departments, and their own resources. Their budget requests should be straightforward to document.

There are often complicating factors. Many companies have “overlays” like regions, segments or functions (or some combination of the three) which often span business units but have their own business leaders and budget requests. Even when there is not an outright “overlay” there are often projects which span segments. Many of these are non-revenue generating (overhead) projects – for discussion of these types of projects, you may also want to consult our last blog post.

For companies which face the task of budgeting for business units in this atmosphere of complexity, a few common problems arise. Overlapping budget requests, either by the overlays or by the segments themselves (or both), can result in double counting resources and projections. Even the most well-designed budget or planning process often does not catch these until late in the process.

Fortunately, there are a few best practices for portfolio management of business units which can head these off. The first involves careful template design. The second involves sharing sessions. The third involves calibration sessions. The fourth involves successful measurement and tracking at the business unit level. The first three should be built into any long-range planning or budgeting process. The fourth one should seem obvious, but it begins with a discrete identification of native business unit returns.

Template design should always be carefully done, especially when it comes to budget processes. When it comes to the treatment of overlays, a template which contains special instructions for identifying these types of initiatives is vital. The odds that both the business unit and the overlays will not follow instructions and identify overlapping initiatives are slim. Even if one requester makes a mistake, the overlapping initiatives should still be identifiable.

Sharing sessions have a specific purpose: to review all the budget requests from the various parties. Usually these sessions involve presentations from the various requesters, ideally identifying each of the budget request items and describing them briefly. Because names and descriptions may vary, it is important for each of the other requesters to pay close attention in these sessions, asking questions where necessary. These are the sessions where overlapping initiatives should be identified if they haven’t been caught already.

Calibration sessions are more specific. These are a great way to quantify overlapping budget requests. These sessions are designed to make sure everyone is using the same unit of measure for projects and the same scale for similar budget requests. They ensure, for example, that expenses like licenses and travel are treated the same way, and that resources like research and development or sales and marketing are at a similar involvement and request level for similar projects. In this way, overlapping budget requests, where they do exist, should be equal from both the business unit(s) involved as well as the overlapping party.

The final element of the portfolio management process in the management of business units is measurement and tracking. Most companies recognize the need to track common performance metrics at the business unit level. Ordinarily, the decision makers in charge of various business units will have their own tracking systems in place, and respond quite well to corporate portfolio management’s tracking efforts, since those efforts help them run their businesses.

What often gets lost in the shuffle of measurement and tracking is the overlay initiatives. These are frequently discretely measured at the overlay level (again, be they segments, functions, etc.), leaving the segments which contribute to their success unaccountable. The resources which were discretely identified within a business unit as contributions to the overlay need to be tracked as well.

There is a reason why the business unit should have been asked in the budget request template to “flag” such overlay items – because in some way their commitment to the overlay creates a dependency on which the overlay relies. If unmeasured, we have observed a tendency to “absorb” these resources into other projects within the business unit, effectively preventing the overlay from achieving its objective. We have even observed business unit owners who had been guilty of this behavior complain because the overlay was not providing the business unit with the proper support.

This situation is the reason that measurement of a business unit’s overlay project budget is critical. It starts with our first point – effective template design. The point of identifying overlay budget requests with such specificity is to set up the measurement and tracking capability. All the steps mentioned here are vital for effective portfolio management of business units. As we mentioned previously, most companies have most of the portfolio management issues worked out when it comes to business units. The few issues which we have observed to exist commonly are ones that can derail an otherwise effective strategy when it comes to portfolio management of business units.

Agylytyx has a lot of experience in planning, budget formulation, and portfolio management. Because many companies are in the midst of annual budgeting, we can assign a consultant to help for an hour by phone. This will not include a sales pitch or obligation. They will listen and react. Contact us to learn more.



Portfolio Management of Non-Revenue Generating Initiatives

A few weeks back, we started a series on Portfolio Management. After our introductory post, we wrote about the Portfolio Management of Risk. We then took a break to write a series about translating finance and strategy. In that series of nine posts, we explored the way the Agylytyx Generator was built especially for that purpose. Now, we return to the topic of Portfolio Management.

In this post, we focus on a topic of importance to many people – the treatment of non-revenue generating items within a corporate portfolio. Every company has them - projects everyone knows you must do in order to “keep the lights on,” to increase company efficiency, to facilitate other revenue generating projects, or even long-term “bets.” The common thread for all these projects (programs, or whatever your unit of measure may be) is that they have costs associated with them, but no immediate and tangible benefits.

There are many approaches to handling these non-revenue generating initiatives within a planning process. We have seen all the following approaches work and not work in varying degrees. Some companies define the NPV by identifying cost savings for projects (or projected long-term revenues for the “bets”). Then they treat all projects the same way, analyzing their revenue generating ones with NPV also. Some companies effectively create a separate “pot” from which to invest in non-revenue generating projects. Some companies consider these projects overhead, and allocate them within their respective class of projects. Still other companies identify dependencies that exist, and effectively “link” these initiatives to the revenue generating items they support.
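To make the first approach concrete, here is a minimal sketch of how a non-revenue project’s projected cost savings can be discounted into an NPV so it can be ranked alongside revenue-generating projects. All cash flows and the 10% discount rate are hypothetical illustrations, not figures from any client.

```python
# Illustrative sketch: scoring a non-revenue project by the NPV of its
# projected cost savings, so it can be compared with revenue projects.
# Cash flows and the 10% discount rate are hypothetical.

def npv(rate, cash_flows):
    """Net present value of annual cash flows, starting at year 1."""
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# A "keep the lights on" project: upfront cost, then cost savings.
infrastructure_upgrade = [-500_000, 150_000, 200_000, 200_000, 200_000]
# A revenue-generating project expressed in the same terms.
new_product_line = [-400_000, 100_000, 250_000, 300_000, 300_000]

for name, flows in [("infrastructure upgrade", infrastructure_upgrade),
                    ("new product line", new_product_line)]:
    print(f"{name}: NPV = {npv(0.10, flows):,.0f}")
```

The point of the exercise is simply that once both kinds of projects are expressed as discounted cash flows, decision makers can rank them on one list – which is exactly what makes the savings estimates worth auditing.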

There is no “right” approach to this issue. Chances are, your company uses one of these approaches and may be considering a different approach. This post will not recommend one approach versus another. We will, however, identify the advantages and disadvantages of the various approaches, along with some suggested mitigation strategies for dealing with the disadvantages of each approach.

First, we will consider the NPV approach. The most commonly cited advantage for this approach is the “level playing field” it creates in the minds of decision makers. All projects, both revenue generating and non-revenue generating, are treated equally under this approach. There are two major drawbacks to this approach, and unfortunately, they feed on each other.

The first is the gaming behavior the approach encourages. While inflated forecasts are nothing new, most companies have figured out how to deal with them for revenue generating initiatives. Few companies are adept at managing the cost savings estimates of non-revenue generating programs, so these are often the worst NPV-inflation offenders. Calibration sessions are a great mitigating strategy – a step in the planning process in which all non-revenue generating estimators are in the same room and are required to use the same assumptions.

A second problem we have seen is that non-revenue generating programs are often not measured with the same rigor as revenue generating initiatives. This makes the gaming problem worse, since it is a lot easier to inflate a forecast if you are never going to be measured on it. Mitigation takes some time, since enforcing measurement takes a while to take hold within a culture, but it can happen within a year. To effectively measure the NPV forecast of a non-revenue generating initiative, it is a good idea to 1) make the forecaster be as specific as possible, 2) require the forecaster to carefully document the assumptions behind the NPV calculation, and then 3) put that plan “in a vault” and take it out periodically to measure against it.

Next, we will consider the separate “pot” or “investment bucket” approach, which maintains an investment account for non-revenue generating initiatives separate from revenue generating ones. This approach has the advantage of ensuring a company sets aside an allocated budget amount for non-revenue generating initiatives rather than having them compete for program dollars with their revenue generating brethren.

There is at least one major pitfall to look out for when following this approach: discrimination against that bucket. Often, companies have the tendency to treat this bucket as an afterthought, essentially giving non-revenue generating initiatives the “leftovers” in the budget process. One way to protect against this pitfall is by including these programs in the planning/budgeting process, communicating along the way that these programs will be treated the same way revenue generating projects are – that they are also vital to the company and will be tracked and measured as well.

A third approach to the treatment of non-revenue generating initiatives involves allocating their costs across the various initiatives, essentially treating them as overhead. At least two major pitfalls exist when employing this approach. It takes concerted effort to overcome them, but it can be done, and the rewards from this approach are considerable for a company with the stomach to put effective mitigation strategies in place.

The first major pitfall companies employing this method tend to fall into is the inclination to “peanut butter” spread the cost allocation for these programs across initiatives. The same theories which apply to income taxes apply here – a “flat tax” approach allocates the same amount of expense to every program, while a “progressive tax” approach allocates expense proportionately to the size of the program. Neither approach is desirable 100% of the time – the reality is that some programs are far more resource intensive than others, and companies that use this method successfully decide at the outset of their planning process on rules for determining which programs will receive greater allocations and which will receive lesser ones. By determining these rules early on, companies can be confident and objective in the way the allocations are determined. Believe it or not, this is the easier of the two potential pitfalls to mitigate.
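The two “tax theory” allocation rules can be sketched in a few lines. The program budgets and the size of the overhead pool below are hypothetical.

```python
# Illustrative sketch of the two allocation rules described above.
# Program budgets and the $300k overhead pool are hypothetical.

program_budgets = {"alpha": 2_000_000, "beta": 1_000_000, "gamma": 500_000}
overhead_pool = 300_000  # total non-revenue initiative cost to allocate

# "Flat tax": every program absorbs the same absolute amount.
flat = {name: overhead_pool / len(program_budgets)
        for name in program_budgets}

# "Progressive tax": each program absorbs overhead proportionally to size.
total = sum(program_budgets.values())
progressive = {name: overhead_pool * size / total
               for name, size in program_budgets.items()}

print(flat)         # each program carries 100,000
print(progressive)  # alpha carries the most, gamma the least
```

In practice, the rules a company sets at the start of planning usually blend these: a base flat charge plus a proportional component, or different rates for different classes of programs.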

The second major pitfall is related to the previous one, but much more insidious. This pitfall, endemic to many companies, involves applying expense to initiatives which do not in fact incur the investment at all. This is most common with long-term bets unique to a business division, a region (often a region of the world in global companies), or a segment of the business. Avoiding such a misallocation (no matter how small) involves identifying the cost for these initiatives and isolating it to other initiatives in the same class (business unit, region, segment, etc.). This sounds relatively easy to achieve – our experience is that it is not. Effective mitigation of this pitfall, if successful, comes close to a fourth and final approach to the treatment of non-revenue generating initiatives.

That fourth approach we have commonly seen is to go so far as to identify project interdependencies, even with revenue generating initiatives. This approach is by far the most detailed, time consuming, and potentially accurate. We have seen this used only in very large corporations. This approach has the advantage of accurately assessing the budget requirements at the department level. There are a couple of major potential pitfalls to watch out for when employing this method, and mitigation requires careful planning.

First, the complexity of the task overwhelms many budgeting processes. Under this approach, especially in a large company environment, budget requests which account for non-revenue generating projects together with revenue generating ones can get very complex, very quickly. We have seen this happen quite frequently, especially with projects like “big bets” which can span budget requestors (and end up on multiple budget requests). To mitigate this risk, companies using this approach must be very meticulous in their template design (or whatever budget request format they use), so that they can accurately identify the percentage of resources being allocated to various projects. Communication between budget authorities can break down over email or conference calls – officially sponsored budget “sharing sessions” are therefore also helpful as a mitigation.

Second, it becomes much more difficult to measure and track the progress of initiatives under this approach, for obvious reasons. For this reason, it is very important that budget requesters, at the time of their request, submit measurable, time-defined, and specific (ideally quantitative) criteria for each one of their budget requests. To the extent that non-revenue generating initiatives are represented in several of their requests and therefore are not specifically identifiable (“enablers”), the budget requester should be encouraged to note that fact and propose measurements for those as well.

The approach to including non-revenue generating initiatives in a portfolio is an issue all companies face. The collective consulting experience of our management team can help you choose the best approach and mitigate the potential pitfalls in this area. For assistance with planning and budgeting needs, contact us.



Translating Finance to Strategy – Using the Agylytyx Approach

In our last post, we looked at what kind of environment would be conducive for finance teams to effectively “play point” for their companies by expressing their budgets in terms of strategy and bounding strategic formulation to what is actually possible to achieve. In this post, we are going to look at a solution specifically designed to help corporate finance departments achieve that task.

The Agylytyx Generator is like a bidirectional translator between strategy and finance. Strategy organizations and other business constituents use their input to create Frameworks. Finance groups populate the Datasets. The combination of the two is strategy visualized.

This approach creates customized presentation support for any business constituent using data from plan, actual, forecast, scenarios, or long-range plans. This same data is viewed different ways by different audiences. For example, a channel executive and a business unit general manager usually have very different sets of strategic concerns. Because of their different perspectives, they will look at the same data set differently. By applying custom-built strategic Frameworks to any set of data, an Agylytyx approach quickly and easily builds decision-ready analytic output.

When FP&A organizations embrace the Agylytyx approach, they inherently become more strategic in their approaches to reporting, variance explanation, guidance, and planning. McKinsey research quantified the impact of the strategy-execution gap. The Agylytyx approach was created to help companies close this gap.

Embracing the Agylytyx approach does not require FP&A to do anything differently than it currently does. Integrating the Agylytyx methodology on top of existing FP&A processes helps FP&A provide better support to business constituents.

For example, a well-established software company had multiple product lines, multiple business units, and did business in multiple parts of the world. The complexity in their business led them to embrace the Agylytyx approach. As Heidi Flaherty, Vice President of Finance and Investor Relations at Advent Software (NASD: ADVS) noted: “using the Agylytyx methodology last year helped us gain actionable insight into our strategic plan and helped us bridge our long-range plan with our operating budget.”

As the example above shows, the Agylytyx approach creates a tight linkage between FP&A activities and strategy. While it is valuable for the FP&A team to translate their results into strategy in order to close the execution gap, FP&A also has a key role to play in the development of strategy itself.

Those of us most familiar with the strategy-execution gap tend to put the emphasis on making budgets reflect strategy more. While this is undoubtedly true, and probably responsible for most of the problem, there is another explanation which deserves consideration and emphasis. The truth is, reallocating budgets according to strategy is hard for a reason – sometimes strategies are so far afield from what is actually possible to achieve that budgets simply cannot be altered to reflect strategy without a complete restructuring of the company.

In order to make actionable budgets, a company must have an actionable strategy: enter a strategic role for finance. As strategy departments go through their annual exercises of strategic planning, it is incumbent on Financial Planning and Analysis departments to create actionable scenarios for the next year which create “boundaries” for strategic consideration. Strategy must be encouraged to think broadly about what is possible for a company to achieve. This type of visionary thinking is what drives successful companies to keep achieving extraordinary results, and it is most effective when bounded by what is actually possible to achieve. To do otherwise risks broadening the strategy-execution gap, with negative consequences to shareholders and all other stakeholders.



Translating Finance to Strategy – Transforming FP&A

In our last post, we looked at some of the challenges that finance teams often face, and we looked at some specific reasons why these might contribute to an inability to link strategy with budgets. In this post, we are going to start to look at solution environment – one which is conducive to FP&A departments linking strategy with budgets.

The goal of this approach is to harness the power of the existing FP&A processes described in our last post. This approach can tighten the link between FP&A and corporate strategy, closing the strategy-execution gap which plagues most large companies.

This approach helps FP&A contribute to an enterprise’s strategy beyond just numbers. By integrating with existing business applications, this approach continuously translates traditional financial metrics into the language of business strategy. Finance teams can express plans, targets, actuals, forecasts, or scenarios in strategic perspectives any business leader can appreciate.

Using this approach, finance teams can produce strategic presentations specifically tailored for various unique executive audiences, including: the Board of Directors, CEO, CFO, SVP of Strategy, Business Unit GMs, Region VPs, Channel Executives, or Product Line Managers. FP&A teams do what they would do anyway, but the Agylytyx approach can help transform these roles. For example, reporting and variance explanation can automate the explanation of the bridge between plan and actual using strategic terms.

This approach also makes it very easy to use Modeling and Scenario Evaluation to assess the strategic implications of scenarios under consideration. It also makes forecasts strategic by enabling users to visualize strategic implications of forecasts. Because of its continuous translation effect, the Funding Profiles approach helps align execution with strategy.

In our next post, we will look at a specific mechanism for addressing this gap.



Translating Finance to Strategy – Common Challenges to FP&A

In our last post, we walked through the typical tasks of a corporate finance department over the course of a year. In this post, we look at the most common challenges FP&A teams face when executing those tasks, and why they are often faulted for strategy-execution gaps in their firms.

Quite often, FP&A spends most of its time on the “FP” portion of the job, and very little on the “A.” In fact, the time FP&A does spend on analysis is often counterproductive: very smart people with excellent analytical potential spend too much time consolidating data and generating reports.

Often companies get carried away with data. These companies spend a lot of time thinking about how to quantify almost all aspects of their business. Everything from corporate goals to department culture can be, and has been, translated into numerical values. Many companies use numerical scoring guides as a substitute for difficult qualitative discussions. Still others find that they ask for more data than they can possibly produce or sift through. In these cases, critical resources may be filling out forms or templates at the expense of their business productivity, or spending most of their time accumulating and sorting data and insufficient time actually analyzing it for results. Ultimately, when the data is accumulated and sorted, decision makers in these environments typically find themselves in an information overload situation – there are simply too many numbers for them to make a real business decision.

Other companies seem to be in the opposite situation. These companies put the proverbial cart before the horse when it comes to their planning processes. Rather than using FP&A to solicit input from business leaders, these companies miss a strategic opportunity by providing specific financial guidelines to the business leaders in order to expedite the planning process. Because these companies tend to “play it safe” by keeping business leaders on a tight leash, they rarely rebalance investments across business units. For this reason, these companies have portfolios that tend to be fairly static. Since most business leaders will choose to spend their budgets on “keep the lights on” type of activities, often these companies will have rather low “innovation” tendencies. The result is that these companies will often fall behind their competitors. This is especially problematic in very competitive marketplaces. These companies also foster a business climate which rewards those who do not take risks because they become complacent in their “business as usual” approach. In the long term, these types of companies will experience deteriorating business results for reasons that are usually difficult for FP&A to trace.

Both extremes described above are common. Most companies seem to have gravitated close to one extreme or the other. Both marginalize the potentially strategic impact of FP&A. There is another way for FP&A to impact strategy. This technique will usually make FP&A more popular in an organization while having a positive strategy effect. Best of all, this approach is additive – it saves FP&A time while making a positive strategy impact.

In our next post, we will look at how it is possible to actually transform FP&A to be more strategically expressive.



Translating Finance to Strategy – A Year in the Life of FP&A

In our last post, we looked at some of the key concepts that finance is often tasked with executing, and we looked at some specific reasons why they are blamed for the existence of a strategy-execution gap in many companies. In this post, we are going to look very specifically at the typical tasks of a corporate finance department as a way of supporting the claims we made in our last post. These are the most common things we see a corporate finance team doing, and why they are often faulted for strategy-execution gaps in their firms.

There are common responsibilities which typically fall into the domain of FP&A in every company. This section lays out those responsibilities and discusses them in the context of the larger role they play in an organization. It looks first at finance activities, then planning activities, then analytical activities.

The inputs for consolidation differ widely from company to company. Consolidation always involves some systems work – gathering data from various sources. Usually consolidation isn’t conducted by one individual; different people on a team may be responsible for consolidating different financial components like bookings, revenue, opex, etc.

One of the key outputs of consolidation is reporting, which typically happens quarterly. Reporting is an output function, and usually involves meticulous formatting of data. In the most automated cases, these outputs are pre-formatted, and creating the desired reports for compliance purposes takes a few mouse-clicks. Reporting is mostly numerical, though it may involve a few graphic components as well. It generally involves a comparison of actual results to a stated plan. Often there is a set of internal management reporting requirements which significantly exceeds external/SEC reporting requirements. Usually, the larger the company, the more complicated the management structure, and the more intricate the reporting.

Typical planning activities involve budgeting and long-range planning.

Budgeting is where “the rubber meets the road.” Most large companies have departments (or their equivalent) which administer budgets. These entities live within business units, regions, functions, or some combination of these. Formulating and handing out budgets is usually the responsibility of corporate FP&A. These budgets often have targets or plans associated with them, are usually set annually in a long-range planning process, and are tweaked quarterly as results come in.

Long-range planning (or long-term planning, or capital planning, or other terms) describes the process large companies use to create annual budgets. Most often this process is coordinated by finance. The process often requires detailed information from business owners. In these cases, it is finance’s responsibility to make sure that the required information is collected and consolidated in a timely manner.

The forecasts gathered during the long-range planning process often provide finance with a unique perspective on potential future results. It is this privileged insight that finance is usually called upon to turn into analysis. Common analytic roles for FP&A include modeling, scenario creation, and guidance.

Modeling and scenario creation are usually ad hoc requests made by the CEO, CTO, CSO, or some other influential business executive. Often the requests are amorphous, such as a request to model the potential impact on the business if two competitors were to merge, or if a recession struck an emerging market. Other times, the requests are quite specific, such as modeling the impact of spending 10% less in sales and marketing for the next 18 months. In both cases, finance is the organization usually viewed as best suited to provide insight to these scenarios.
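As a sketch of how the second, more specific kind of request might be framed quantitatively, here is a toy model of “spend 10% less in sales and marketing for the next 18 months.” The baseline figures and the revenue-sensitivity assumption are invented for illustration; a real model would be far richer.

```python
# Toy scenario model for the specific request mentioned above: spend 10%
# less on sales and marketing (S&M) for 18 months. All inputs hypothetical.

baseline_sm_spend = 1_000_000   # assumed monthly S&M spend
revenue_sensitivity = 0.3       # assumed: $1 less S&M loses $0.30 revenue

months = 18
spend_cut = baseline_sm_spend * 0.10          # monthly reduction
savings = spend_cut * months                  # total expense saved
lost_revenue = spend_cut * revenue_sensitivity * months
net_impact = savings - lost_revenue           # net operating impact

print(f"Savings over {months} months: {savings:,.0f}")
print(f"Estimated revenue lost:      {lost_revenue:,.0f}")
print(f"Net operating impact:        {net_impact:,.0f}")
```

The hard part of any such request is, of course, the sensitivity assumption – which is precisely where finance’s privileged view of historical forecasts and actuals earns its keep.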

In public companies, finance is usually called upon to provide guidance to the executive team. Even when (as is increasingly common), a company does not provide specific guidance on its earnings call, remarks by management usually portend future results. As the group best positioned to understand business dynamics, finance is usually obligated to share insights with the executive management team in the form of expected guidance.

Unfortunately, the collective obsession with numerical output in all of the above roles usually limits the true strategic value of corporate FP&A. While this result is unintended, it is almost always the case. There are many barriers to consider.

In our next post, we will look at some of the most common challenges an FP&A Department typically faces when executing these tasks.



Translating Finance to Strategy – Finance at the Nexus

In our last post, we introduced the concept that finance is often blamed for the strategy-execution gap, and alluded to its basic cause – the inability to effectively translate between strategy and budgets. In this post, we are going to look more clearly at the way finance departments often sit at the intersection of budget and strategy. As such, finance departments are often blamed, accurately or inaccurately, for this gap.

Finance usually sits at the point of intersection of the strategy-execution gap. This happens because finance is responsible for administering budgets as well as planning them. In most companies, finance coordinates a long-range planning process designed to set budgets for the next year. Finance also administers the budget throughout the year, recommending tweaks when necessary. At least on paper, this long-range planning process is sometimes associated with a parallel process of strategic planning.

The implication, often rendered explicit, is that finance is the organization responsible for coordination of a company’s resource allocation. Of course, it is up to the lines of business to execute. However, as the link between the lines of business and the strategic heart of the business, finance serves as an important line of communication.

Finance is responsible for issuing plans, tracking performance, explaining variances, creating forecasts and guidance, and developing recommendations. These responsibilities are critical to a company’s success – without these functions a company would operate without any control point and would lose an important source of visibility. Still, because these activities are standard in finance organizations, they are often taken for granted – and only recognized when a mistake is made in one of these areas.

Perhaps most impactful of all is the fact that most companies treat these activities as tactical and fail to capitalize on their strategic potential. The next two sections examine these activities in detail, and illustrate how finance organizations can make these same activities be perceived as strategic contributions and help companies close the strategy-execution gap.

In our next post, we will examine the specific reasons for these factors by looking at the essential tasks of a corporate finance department.



Translating Finance to Strategy – Budgets usually reflect tactics not strategy

In our last post, we introduced the reason for the strategy-execution gap, and alluded to its basic cause – the inability to effectively translate between strategy and budgets. In this post, we are going to look more clearly at the budgetary reason for that gap.

This whole problem happens because the allocation of budgets in big companies can have a big impact. The process takes different forms in different companies. In some companies, annual budgets are created by executive decision makers and communicated to business leaders. In other companies, business leaders create budget requests and associated forecasts which serve as important context for how budgets are set. In most companies, regardless of the approach, budget allocations rarely change dramatically from one year to the next. Whether there is greater or lesser “affordability” than the previous year, the most common initial approach to a surplus or deficit is an equal spread among the various business leaders.
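The “equal spread” can be sketched in a few lines; the business unit budgets and the shortfall below are hypothetical.

```python
# Sketch of the "equal spread": a surplus or deficit versus last year is
# divided evenly among business leaders. All figures hypothetical.

last_year = {"networking": 40_000_000,
             "storage": 25_000_000,
             "services": 15_000_000}
delta = -6_000_000  # this year's affordability shortfall

spread = delta / len(last_year)
this_year = {bu: budget + spread for bu, budget in last_year.items()}
print(this_year)  # every unit absorbs the same 2M cut, regardless of strategy
```

Note what the arithmetic ignores: each unit absorbs the same cut whether it is the company’s growth engine or its laggard, which is exactly the strategic blind spot the rest of this post describes.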

As a next step, companies often make minor adjustments to the allocation. In a typical situation, if a company’s leadership is feeling especially “strategic,” they may reduce the allocation for some parts of the business and increase it for others. This minor reallocation is usually done on instinct – business leaders usually know which parts of the business hold more promise and which are likely to underperform. Because even minor tweaks can have a large impact on business leaders’ organizations, these adjustments tend to stay small.

Larger budget shifts capable of keeping pace with a company’s evolving strategy are rare, and the strategy-execution gap develops for this reason. Most companies make their largest budget changes in hard times, when they are forced to make triage decisions. If they are able to successfully recover, they tend to revert their allocations to the way they were before the crisis, because that is the only way the company knows how to budget normally.

There is a certain inertia at play here. Because companies have built structures that they perceive a need to support, these structures require a roughly equivalent budget allocation each year. Making major budget reallocations without a very specific plan can have unintended consequences in this environment: organizational structures which support strategic initiatives can be eroded, ultimately diminishing a company’s ability to execute against its strategy instead of supporting it. For this reason, it is rarely apparent HOW to go about major budget allocation shifts. Incidentally, this is why the “zero-based budgeting” concept works in theory but rarely in practice.

There is also a certain element of political resistance. Especially in large companies, budgets tend to involve a certain amount of negotiation, so budget allocations are typically influenced by a very broad executive audience. When it comes to sensitive budget discussions, the human self-preservation instinct usually kicks in. The squeaky wheel gets the grease for a reason. This collective instinct usually makes it difficult to execute major budget reallocations.

The fact is that it is difficult to identify proper budget reallocations, and even when a company can identify how to successfully reallocate, it generally lacks the political will to make the hard choices. It is little wonder that a gap develops between a company’s aspirations and its ability to achieve them. Unfortunately, the scapegoat for this dilemma is often finance.

In our next post, we will look more deeply at how a typical finance department is affected by all this.



Translating Finance to Strategy – The reason for the strategy-execution gap

In part one of this series, we introduced the strategy-execution gap, and alluded to its basic cause – the inability to effectively translate between strategy and budgets. In this post, we are going to look more clearly at the reasons for that gap.

There is a clear cause which needs to be addressed: departments in large organizations need to receive budget allocations which show the company is serious about accomplishing its strategic objectives – that it is truly willing to “put its money where its strategy is.”

The root cause behind the inability to allocate budget according to strategy is easy to identify. Addressing it is not. Just as it is hard to turn a battleship, it is not easy to accomplish large budget reallocations in a big company environment. There are usually two important reasons. First, it is usually not easy to identify objective ways in which budgets should be allocated differently. Second, even when it is, the political will to impose the reallocations often doesn’t exist within large corporate environments – inertia rules!

Sometimes companies are aware of this, and try to put stopgap measures in place. These are often acts of desperation which do not work, and can make the situation worse. For example, in one large high tech company, the areas of strategic emphasis for the coming year were clearly identified by corporate management. A board packet was created by the CEO which identified the company’s strategy. Badges were distributed to all employees with the corporate goals printed on them. A sophisticated system of cascading employee objectives linked corporate objectives to individual executive goals to manager goals to employee goals. In one encouraging step, long-range planning scenarios were created which aligned the next year’s budget allocation to these corporate goals. Ultimately, the budget allocations associated with these scenarios were rejected in favor of a much simpler allocation method – distributing budgets based on the previous year. The result was a crippling mass defection of some of the company’s most qualified leaders and individual contributors.

This company found out that there is no substitute for creating operational budget scenarios which reflect corporate strategic goals. Often, corporate strategies are not quantifiable enough to be linked adequately with budgets. An attempt to “force” strategies to fit into budget choices will, by definition, be artificial. Yet many companies try this approach. It puts the proverbial cart before the horse – rather than having business leaders map parts of their budget requests to corporate strategies, it is too easy and too tempting to take the “easy” way out of addressing the strategy-execution gap. These efforts may “mask” the real reason for the strategy-execution gap – the lack of an organic translator between budgets and strategy.

In our next post, we will look more deeply at how budgets in most companies are not strategic at all.



Translating Finance to Strategy – Strategic Emphasis

In our last post, we introduced the problem companies commonly face in attempting to align their budgets with their corporate strategies. In this post, we are going to look specifically at the importance most companies place on their strategic focus.

Many companies are great at setting strategies. These companies recognize that strategy is crucial to their survival and their board’s choices usually reflect that understanding.

Many top business schools and consulting firms cultivate strategic thinking. Top corporate executives, who often attended these schools and did stints at these consulting firms, are highly sought after for their prowess in formulating strategy. Most large companies have entire departments with a dedicated focus on strategy. These departments are usually staffed with individuals who attended these same business schools and worked in these same consulting organizations.

Creating strategy is usually more of an art than a science. Inputs to strategy formulation are information-based. Some points of information are raw data points such as market definitions, TAM, forecasts, and business drivers. Some points of information are more subjective, such as competitor trends, M&A activity, and technology innovations. Everyone has a slightly different strategy formulation process and cadence, but most companies use a “funnel” metaphor. In this metaphor, the process starts with large scale strategic considerations and narrows down as various strategies are considered until a strategy is formulated.

Communicating strategy is an art in and of itself. The outputs of this strategy formulation process are as important as the process itself. The strategy is communicated to and endorsed by various constituents in various ways – boards, corporate executives, managers, and employees. Most companies correctly place a high degree of importance on an effective communication of strategy.



Translating Finance to Strategy – an Introduction

One of the most common problems we see is large enterprises which fail to establish an effective link between their corporate strategy and their financial execution. Commonly known as the “strategy-execution gap,” this problem has inspired a lot of academic (and not so academic) speculation as to its root. For example, one of the wildest but most common speculations is a cultural mismatch between the strategic minded and the finance minded.

Our experience is that this gap develops from something quite simple: the lack of a company’s ability to successfully translate between their budgets and their corporate strategies. For the companies which have let this problem fester, we see entire corporate strategies being formulated without a view of whether the budget options will actually allow any of those strategic goals to be enacted. We also see companies establishing budget scenarios without regard for their impact on corporate strategies.

Quantifying the effect of this gap is not easy. This gap is not something that matters a great deal in any one given year, but the cumulative effects which result from not addressing this problem can be devastating. The reason is not unlike a ship which goes slightly off course – it is not likely to matter much at first, but the longer it goes unaddressed the worse the problem becomes. One prominent research firm found that this problem is rampant in large enterprises, resulting in a 40% loss in shareholder return over time. That study methodology was long-term and retroactive, comparing those who had allowed the gap to fester with those who had either preempted it or addressed it early on.

Correcting a persistent problem like this one requires a concerted effort to overcome the gap. Like the off-course ship in the example above, ultimately a radical course correction will be necessary to address the issue. Companies that have put off solving the strategy-execution gap created when budgets are not aligned to strategies will find that it takes a more concerted effort to address the translation between the two.

Fortunately, there is a ready solution to translate between budgets and strategies. Any company can adopt this translator. But really solving the problem requires a lot more than spending the money required for a translator. It requires configuring the translator to speak the language that is unique to each company, and it requires the will and commitment to use the translator. The longer a company waits to translate between budgets and strategies, the longer it will take to address the problem once the solution has been implemented.

In this series, we are going to look a little more at the common problem of translating between finance and strategy, examine the impact of this problem, and look at solutions to it.



Portfolio Management of Risk

The concept of “risk” in portfolio management was likely also borrowed from the finance community. Just as the notion of a “portfolio” originated there (as we looked at in our last post), the notion of “risk” in a portfolio likely emerged in the same way. The “riskiness” of an individual investment in a financial context was likely best measured by the volatility of similar investments, also considering any change in circumstance. It is important to understand how the concept of “risk” evolved in that context in order to best employ the principle in corporate portfolio management.

With the advent of a “portfolio” of investments, the nature of “risk” evolved and concepts like “diversification” (specifically “diversification of risk”) were born. The “riskiness” of a portfolio of investments could not be characterized by the sum of the volatility of the investments in that portfolio. For example, if all investments in the portfolio were in the same sector, the “riskiness” of that set of investments could not be characterized simply by the projected volatility of investments in the sector. Rather, in this case the degree of “exposure” of the “portfolio” to “volatility” would be lessened by exposure to additional sectors (“diversification” or more specifically “diversification of risk”).

In order to formulate a diversification strategy which best met the client’s needs, the concept of a “risk profile” was born. That notion essentially amounted to having the client define their desired level of exposure to various risks. Usually, a portfolio manager would ask the client a series of questions to assess their attitude toward risk in a variety of areas (for example, ranging from their views on gas prices, tariffs, interest rates, and a lot more) to create a “risk profile” for the client. Then, and only then, could they create a portfolio which best met the client’s “risk profile.”

The concept of creating a risk profile in corporate portfolio management is the same, if not a lot more complex. Two things complicate the task: first, there is usually more than one decision-maker, and second, there are a lot more potential complexities to analyze in terms of risk. For these two reasons, we usually recommend an iterative approach to the development of a risk profile designed to drive consensus.

There are different approaches to developing a risk profile in corporate portfolio management. We recommend a simple scale that tracks each risk factor for two elements: the likelihood of the event happening and the impact of the event itself. Both elements are typically ranked on a five-point scale.
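As a minimal sketch of this two-element scale (the factor names and scores here are invented for illustration, not drawn from any client engagement), each risk factor might be captured like this:

```python
# Hypothetical sketch: each risk factor carries two scores on a 1-5 scale,
# one for the likelihood of the event and one for its impact if it occurs.

RISK_SCALE = range(1, 6)  # the five-point scale

def make_factor(name, likelihood, impact):
    """Validate and record a single risk factor's two scores."""
    assert likelihood in RISK_SCALE and impact in RISK_SCALE
    return {"name": name, "likelihood": likelihood, "impact": impact}

# An invented corporate risk profile with three factors
profile = [
    make_factor("hurricane exposure", 2, 5),
    make_factor("tariff on cheese", 4, 2),
    make_factor("fuel cost spike", 3, 3),
]
```

Keeping likelihood and impact as separate scores, rather than collapsing them into one number per factor, is what later allows the two to be consolidated and plotted independently.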

It is important to obtain consensus on the likelihood and impact of each risk factor. For this reason, sharing the responses of various decision makers, soliciting additional risk factors, and driving everyone to a consensus on the resulting “corporate risk profile” is very important. The payoff is that when expressing the output of the budget process, it is possible to examine the impact of each allocation choice on the corporate risk profile. Crucially, this can be done with a level of objectivity.

Further, we recommend being as specific as possible when it comes to risk factors and their weighting. For example, saying “hurricane” is better than saying “weather,” and saying “tariff on cheese” is better than saying “protectionism.” When it comes to expressing risk factors, both for likelihood and impact, the more linguistically specific a term is, the more likely a decision maker will be able to accurately express their views, and the more valid the outcome will be.

Each company has its own unique risk profile. Just a few of the risk factors we commonly see have to do with a specific company’s exposure to:

Weather events

Target market demand

Capital cost volatility

Currency exchange (FX) rates

Fuel costs

Tariff rates

Political instability

There are two advantages to consolidating the score for each risk factor to two vectors: event likelihood and event impact. First, obtaining consensus for consolidated risk scoring allows a sense of objectivity and confidence in the results. Second, consolidating the scores in this way makes the expression of the output much easier.

For example, it may be desirable in the budget allocation process to present a consolidated view of riskiness which adds all the likelihoods and all the impacts together, in order to plot various spending allocations according to their level of riskiness.
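A minimal sketch of that consolidation (the allocation names, per-factor scores, and dollar amounts below are invented for the example) might look like this:

```python
# Hypothetical sketch: consolidate each allocation scenario's per-factor
# (likelihood, impact) scores into two totals - summed likelihood and summed
# impact - with the allocation amount kept for use as the bubble size.

allocations = {
    "Allocation 2": {"amount": 40, "scores": [(2, 3), (3, 2), (1, 4)]},
    "Allocation 4": {"amount": 55, "scores": [(3, 2), (2, 1), (2, 2)]},
}

def consolidate(scores):
    """Sum (likelihood, impact) pairs into the two plotting vectors."""
    likelihood = sum(l for l, _ in scores)
    impact = sum(i for _, i in scores)
    return likelihood, impact

# Each allocation's position on the likelihood/impact chart
points = {name: consolidate(a["scores"]) for name, a in allocations.items()}
```

The two totals give each allocation scenario a single position on a likelihood-versus-impact chart, which is exactly the kind of plot discussed next.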

Consider the graphic depicted here. Various spending allocations (the bubble size represents the actual amounts) are plotted by the impact and likelihood of risk. Assuming all combinations of allocations meet corporate affordability, we would recommend a strong look at Allocation 4, since it contains the impact of risk factors the best and is only slightly more likely to trigger risk factors than Allocation 2. We say a strong look (as opposed to an outright recommendation), since the relative benefit of these allocations is NOT depicted here.

One of the advantages of having risk scores consolidated in this way is the ability to also consider the relative benefits of various allocations with respect to their risk. The prospective benefit to the company must also be considered. For example, the ideal investment scenario from a cost/expense perspective may not return much benefit to the company. Consolidated risk scoring always gives the presenter an opportunity to present the benefit of investment scenarios in the same light.

Benefits are expressed in different ways in different companies. In some companies, it’s total contribution margin. Others measure topline growth in revenue or even bookings. In rare cases, such as pharmaceutical companies or other companies incubating long term projects, it may be time to market. In these cases, slight differences in risk scores can have a big impact.

Consider the graphic depicted here. The time to market for each respective investment allocation (the same ones shown in the bubbles above) is depicted against the risk scores. Clearly, this output shows that the time to market impact of investment scenarios 1 and, particularly, 2 is significantly longer than that of investment allocation scenarios 3, 4 and 5. The other obvious implication from this graphic is that the risk factors are relatively insignificant when the time to market vector is considered.

The fourth allocation scenario, based on the combination of its risk containment and time to market advantages, is the one we are most comfortable recommending based on this output.

It is worth stressing that NONE of the output or recommendations here would be possible if the “heavy lifting” of developing risk factors and weighting levels, and gathering consensus around the same, had not been done. Proper portfolio management of risk is not easy, but if done correctly the outcomes can strongly affect the strategic direction of a company.

If you would like to discuss portfolio management, particularly risk, feel free to reach out to us.



Introducing Portfolio Management

We have been seeing a steady dilution of the term “portfolio management” these days. The term has become a politically expedient one to use in meetings as a rationale for taking many different actions. It seems like we can become immune to certain terms, and other jargon will come into vogue as an accepted term. After all, who would not want a “portfolio management” approach?

Where did the term come from? Most likely it was borrowed from the investment community, where the idea of a portfolio of investments came about. In this case, “portfolio” simply referred to a collection of investments held by an individual or entity. Often terms like a “balanced portfolio,” “portfolio returns” or “portfolio risk profile” were used to refer to the strategies involved in “managing a portfolio” of investments. Thus, the term “portfolio management” was born.

It wasn’t long until the concepts of a “portfolio” and “portfolio management” took hold in larger companies. In larger companies, or companies with sufficient complexity, a portfolio management approach is appropriate and desirable. Conversely, there are certain situations where such a term might not seem appropriate – if a company is smaller, lacks sufficient complexity, or has a cultural bias against anything that sounds academic, for example. In these situations, even broaching the term “portfolio management” may be political suicide.

Complexity is a key variable, and some companies will have sufficient complexity to warrant portfolio management even if only one of the following variables is present.

Proxy measurements for complexity are things like the number of:

projects conducted

channels of distribution used

products and services offered

geographic regions defined

business units or divisions included

As a general rule, the more of these which are present, the more a company needs a portfolio management approach.

While there are lots of portfolio management approaches, this series will focus on a couple of key concepts in portfolio management: risk and optimization. To some extent, these concepts will build on each other. Our next post will be about the concept of risk in a portfolio and how it can best be managed. Next, we will build on that concept of risk, specifically by turning our attention to the optimization of affordable spend in the context of budget allocations and how to get the most “bang for the buck.”

The successful development of a risk profile and a technique for optimizing spend in that context are central to portfolio management. Finally, we will turn our attention to communicating the output of this portfolio management approach. We strongly urge those who wish to distinguish themselves in the field of portfolio management to pay close attention to this set of posts.



Zero Based Budgeting – A Contrarian Viewpoint

Part Two of Two

In the first part of this series we looked at the origin of the Zero-Based Budgeting concept, and how different people now have very different views of what it means. We also talked about our experience in seeing firms try it, and how it always reverted into an exercise in budget efficiency – something firms should always strive for in budgeting anyway. Finally, we mentioned how we planned to discuss more efficient ways to achieve the same outcome of efficiency in the budgeting process.

Driving efficiency into budgeting starts with an understanding of the current level of “inertia” or the “burn rate” of various parts of the business. Rather than assuming one can zero them out, understanding how to optimize affordability within the context of various constraints is the challenge facing budgeteers. That basic efficiency technique is lost in real zero-based budget exercises. To simplify, one cannot get where one wants to go without a complete understanding of where one has been.

Any finance-led budgeting process is essentially an optimization of resources based on affordable spend. The optimization is most often based around some kind of desired risk profile or scenario. The constraints of that optimization may be built around a short-term profit maximization scenario, a balanced risk approach, or a long-term revenue achievement goal. These desired outcomes and goals are antithetical to a zero-based budget approach. The desired outcome may be achieved in spite of zero-based budgeting, but it is never facilitated by zero-based budgeting.
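As a toy illustration of what “optimization of resources based on affordable spend” means (the project names, costs, projected benefits, and the spending cap are all invented; real exercises layer in risk-profile and interdependency constraints):

```python
# Toy sketch of finance-led budget optimization: choose which funding
# requests to fund so as to maximize projected benefit without exceeding
# the affordable-spend cap. Exhaustive search is fine at this toy scale.
from itertools import combinations

# (name, cost, projected benefit) - invented for illustration
requests = [("Project A", 30, 50), ("Project B", 20, 40), ("Project C", 25, 30)]
affordable_spend = 50

def best_allocation(requests, cap):
    """Pick the subset of requests with maximum total benefit within the cap."""
    best, best_benefit = (), 0
    for r in range(1, len(requests) + 1):
        for combo in combinations(requests, r):
            cost = sum(c for _, c, _ in combo)
            benefit = sum(b for _, _, b in combo)
            if cost <= cap and benefit > best_benefit:
                best, best_benefit = combo, benefit
    return best, best_benefit
```

Note that the starting point is the full set of existing requests and their known costs – the “inertia” or “burn rate” discussed above – rather than a blank slate, which is precisely the information a true zero-based exercise throws away.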

Our next post is not technically part of this series on Zero-Based Budgeting, but it is a logical outgrowth of it. We will be starting a series on optimization techniques. We will look at those techniques as a true alternative to zero-based budgeting, focusing on getting the most “bang for the buck” within the context of what a realistic budget looks like.



Zero Based Budgeting – A Contrarian Viewpoint

Part One of Two

Zero-based budgeting is a concept that has received a lot of attention. There are books and articles written about it, consulting practices built around it, educational seminar sessions devoted to it, and social media discussion about it. A quick search on the internet will turn up pages and pages about it.

Zero-based budgeting is, in fact, difficult to define simply because there are completely contradictory ideas of what it is. Zero-based budgeting was originally conceived as exactly what it sounds like – starting each budget cycle completely afresh, assuming zero budget for everything, then building a budget by applying affordable dollars to the most deserving parts of the company. Those who viewed that as an impractical approach say that zero-based budgeting really means cultivating a culture of efficiency in budgeting, as if somehow just using the term “zero-based budget” were some kind of semantic magic bullet.

There are a few common criticisms of the zero-based budget approach. To be fair, we do not consider those criticisms to be exclusive to the zero-based budget approach. In fact, any finance-led process can fall victim to the same basic criticisms – such as claims that the zero-based budget technique is too resource intensive or cumbersome.

Our experience has been that firms which experiment with a zero-based budget approach find that the inertia built into OPEX in various parts of the business usually swamps efforts to “start budgets from scratch.” This is not to say that a “culture of efficiency” in budgeting should not be cultivated, but in our experience, there are much better ways to do this than to play around with a zero-based budgeting idea. In the concluding part of this series, we will introduce some of the concepts which can help with budget optimization, and why these are easier and faster than the whole notion of “zero based budgeting.”



Applying the Finance Led Process Lifecycle to LRP

A Recap

After a full series which covered many of the primary pitfalls companies face in designing and executing their annual budget cycle (commonly called long range planning or LRP), we decided it would be helpful to plot those pitfalls according to when they usually occur. Our intent in doing so is to help determine the optimal remediation techniques and when to apply them. Our original series covered how to detect each potential pitfall and how to apply a solution to that challenge. It did not cover best practices for dealing with several potential pitfalls simultaneously, or when those practices were best applied.

To design and execute an optimal annual budgeting process, we used a lifecycle methodology we developed especially for finance-led processes. We introduced the process we call the finance-led lifecycle almost a year ago. Since then, we have applied it to several situations, both in public and in private engagements. This approach is ideal for plotting the potential pitfalls an LRP process might encounter. Tracking an LRP process through the finance-led process lifecycle and applying each stage in that lifecycle provides us with an excellent opportunity to consciously avoid the potential pitfalls in planning.

Since the LRP process is most often a finance-led process, and since it is usually cross-company in nature, it tracks nicely according to the finance-led process lifecycle. You will recall that the methodology is a two by two matrix formed by the amount of corporate involvement (low to high on the vertical axis) and the degree of process completion (low to high on the horizontal axis). Most finance processes start in Quadrant 1 (only or predominantly finance involvement, with the process just being envisioned), proceed through Quadrant 2 as the process design is socialized and refined, move on to Quadrant 3 when the process is executed across the company, then conclude in Quadrant 4 as finance renders the output of the process into a decision-ready form.

We call those stages the four C’s, and the LRP process lends itself well to these Quadrant names as it proceeds through the lifecycle. We call the first Quadrant the Conception stage. Even when the LRP team remains the same in a company, and the process remains largely unchanged, there is always a design process (even if that stage can be expedited by reusing previous material). The second Quadrant we call the Collaboration stage. It occurs when the finance team leading the process attempts to build consensus for the LRP process, usually by incorporating changes to the process suggested by business leaders. When the process is fully designed and endorsed, the finance team leads the company through the actual process (the Consensus stage), obtaining the data as agreed to be provided. Finally, the finance team becomes insular again in the final Coordination stage, taking the collected data and making it decision-ready for leaders to make budget allocation choices.

We identified some unique problems associated with each stage in this process. In previous posts, we talked about how to identify and remediate each problem. In this series, we isolated the problems unique to each stage of the process. In this final post, we’ll focus on what should generally be done at each stage of the LRP design and execution process to avoid the entire class of pitfalls associated with that stage.

During the Conception stage, for example, ideas are great but they need to be bounded by what’s attainable. The key during this stage is guidance. Resolve not to be too detailed with guidance, and make sure that this guidance is linked to corporate goals.

During the Collaboration stage, a team must know when to say “yes” and when to say “no” to modifications. The key word here is “calibration.” Making sure teams across the business will work together and provide their data in the same way is critical to avoiding endless planning cycles and disparate datasets ultimately of use to no one.

During the “Consensus” stage, a team must guide and support business leaders throughout the planning execution without seeming overbearing or obsequious. The key word here is “encouragement,” meaning that business leaders and their constituents must comply with the process they helped design and signed off on during the Collaboration stage. Of course, they need to operate within those parameters with as much leeway as the process allows. Gentle encouragement to proceed with latitude “within the rails” will help ensure success in the final stage.

Once all the data has been collected, the process of making the data actionable takes place in what we call the “Coordination” stage. It is important that teams provide the “right” picture of the data to executive decision makers, one which will help them make up their minds as to the correct annual budget allocation. The key word here is “objectivity.” Retaining an air of objectivity will be the key to making sure all the “pieces of the puzzle” fall into place for decision-makers, and that it happens in such a way that the correct decisions and the rationales for them become obvious to all.

We hope you have enjoyed our extensive series on Long Range Planning. We have been involved in many long range planning processes in many different industries. We regularly do free assessments of planning processes – feel free to reach out to us for more information.



Applying the Finance Led Process Lifecycle to LRP

Quadrant 4

This post is the fourth one applying a methodology called the “Finance Led Process” Lifecycle to some of the potential challenges any corporate finance organization faces when leading an annual budget exercise, often called “Long Range Planning” or LRP. This is the final quadrant in our LRP lifecycle. We will probably publish one more blog post recapping the series before we move on to other topics.

In this blog post, we are going to focus on potential pitfalls which can occur during the “Coordination” stage of the Lifecycle. The Coordination stage occurs in the annual budgeting process when all the relevant information has been collected from the business. It is at this point that the team coordinating the LRP process has two areas of responsibility: 1) coordinating with the Executive Team so that the team can decide which allocation they wish to embrace and 2) translating that decision into budgets and communicating that budget detail to business leaders.

These two roles are never easy. Agylytyx has seen the Coordination stage of LRP happen many times in large companies, and has never seen a time where the decisions are easy and the budget outcome is not controversial. We have given names to some of the thorniest pitfalls we have seen companies fall into during this final stage of coordinating the budget process. The good news is that there are some things that can be done in planning design which will make these roles much easier to execute when the process reaches its inevitable conclusion.

We remind everyone of a statement we made in last week’s post: “not every potential pitfall in the LRP process can be effectively designed away, but a good LRP design will help head off many of the problems before they occur. At a minimum, an effective design can help a team recognize potential pitfalls as they start to occur so that they can take the necessary steps to remediate them. An effective LRP process will ensure that data is collected reliably, consistently, and in a timely fashion.”

Three of the four potential pitfalls are focused on the first role mentioned above: coordinating with the executive team to decipher what the potential allocations of their budgets are, and how those allocations will affect outcomes. In our experience this is the more difficult of the two roles. There is one pitfall associated with the second role, communication, and it is an important one. Let’s look at these potential pitfalls through the lenses of the two basic roles in this Coordination stage.

Challenges Associated with Coordinating with the Executive Team

During the Coordination Stage of the Planning Process, communicating with the Executive Team to help decide about funds allocation can be tricky. Typically, so much data has been collected from the business leaders that many teams conducting Long Range Planning exercises find themselves challenged with Information Overload. Agylytyx has seen this be a crippling, paralyzing problem – in some cases we have seen the problem spread to an executive team. In one such case the executive team essentially threw out the data and made their own allocation of budget based on instinct.

This is one of the easiest problems to design away. Agylytyx has helped many companies do it. It does involve some planning and consensus building up front. It involves obtaining agreement from the Executive Team that a certain set of “Constructs” will be used to make the ultimate funding decision. One client even had a name for it, the “Funding Profile.” This essentially amounted to requirements for the data collection effort. Obtaining this agreement up front allowed the team to collect the information it needed to populate the “Funding Profile.”

A second problem which frequently occurs in the process of communicating information to the executive team about planning data is one which we call Forecast Folly. This pitfall occurs when teams are overconfident with the projections they are making. As we often read in the footnotes of many documents (paraphrased for our purposes here) past performance may not be indicative of future results. Consequently, teams communicating the potential benefits achieved through various budget allocation choices must be very careful about what they represent.

This one is more difficult to solve than the Information Overload problem, but like Information Overload it can be addressed. There are ways to design the Long Range Planning process which can increase confidence in the results. One is through designed calibration (“Sharing Sessions”), which will at least ensure the forecasts are all done in the same way. Another is through comparing previous forecasts to actual results in order to know which groups are better at forecasting than others, and calculating the appropriate “discount factor” to apply to specific forecasts.
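A minimal sketch of the discount-factor idea (the numbers are invented for illustration, and real implementations would weight cycles, handle outliers, and cap the factor):

```python
# Hypothetical sketch: derive a per-group "discount factor" by comparing a
# group's past forecasts with the results it actually delivered, then apply
# that factor to its new forecast before it reaches decision-makers.

def discount_factor(forecasts, actuals):
    """Ratio of delivered results to forecasted results across past cycles."""
    return sum(actuals) / sum(forecasts)

def discounted(new_forecast, forecasts, actuals):
    """Adjust a new forecast by the group's historical discount factor."""
    return new_forecast * discount_factor(forecasts, actuals)

# A group that forecast 100 and 120 but delivered 80 and 96 earns a 0.8
# factor, so its new forecast of 150 is discounted to 120.
factor = discount_factor([100, 120], [80, 96])
adjusted = discounted(150, [100, 120], [80, 96])
```

Applying such a factor consistently across groups is one concrete way to keep Forecast Folly from coloring the comparisons presented to the Executive Team.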

A third pitfall common to the role of communicating with the Executive Team is one we call Pragmatic Profiling. Pragmatic Profiling occurs when finance teams attempt to “shove” projects into a very few buckets, usually because the executive team has dictated they do so. Often this is a result of a “vacuum” which develops when an Executive Team plagued with the Information Overload problem above attempts to dictate how things should be simplified. In this case, a common reaction by the Long Range Planning team is to force projects into one of the buckets defined by the Executive Team.

Whether it was the Executive Team’s idea or not, this potential pitfall typically results in a comparison of projects that should not be compared. The proverbial “apples and oranges” maxim applies here in a very real sense. If firms are attempting to “level the playing field” through objective criteria (even ones as compelling as EVA or NPV), they risk ignoring several key elements, like project interdependencies, length of forecast, and risk, to name a few. This problem rarely has a disastrous impact in a single year, but it is a subtle problem that will manifest itself over time as companies fail to reach their aspirations or flounder in the face of more nimble competitors.

Fortunately, there are ways to head this problem off in planning design. Ultimately some of these solutions will force a larger discussion about linking finance and strategy. If that occurs, it is a discussion worth having, in that it is vital to the future of the company. From a tactical perspective, the means to head off the Pragmatic Profiling problem, which can occur as a group coordinates LRP results, are vital.

A Challenge Associated with Coordinating Budget Results

The potential pitfall associated with the communication of budgets has nothing to do with the communication process to the business leaders per se. As we mentioned previously, there is no situation we’ve seen where budgets are communicated and there is no controversy associated with that dissemination. There will always be “winners” and “losers” in the budgeting process. We have mentioned several points of buy-in along the way so that, IF our recommendations are implemented and effective, business leaders will at least perceive the process to have been fairly run and the budget criteria decided as advertised, and there will be at least a begrudging admission by the political “losers” in budgeting that the decision makes sense.

After budget dissemination, there is still one potential pitfall remaining which many companies fall into. We call it Accountability Decoupling. It occurs when budget forecasts are merely “archived” or “put into a vault” and never used in actively measuring results. In big companies, a frequent practice is to replan after budgets are handed out; those revised forecasts are then the ones used for measurement. This is as good as setting oneself up to fall into the Forecast Folly trap in the following LRP cycle. Additionally, business leaders will know if their budget forecasts during LRP are not being tracked, and this will encourage “gaming behavior” in next year’s budget process.

“Those who cannot remember the past are condemned to repeat it.” George Santayana was not talking about the annual budgeting process, but he could very well have been. Agylytyx has a lot of past knowledge and can help companies significantly improve their budgeting process and avoid common budgeting challenges. Contact us to strengthen your budgeting process. We often work hand in hand with finance teams. In rare cases, we have executed the budgeting process on behalf of clients.



Applying the Finance Led Process Lifecycle to LRP

Quadrant 3

This post is the third in a series which applies a methodology called the “Finance Led Process” Lifecycle to some of the potential challenges any corporate finance organization faces when leading an annual budget exercise, often called “Long Range Planning” or LRP. In this blog post, we are going to focus on potential pitfalls which can occur during the “Consensus” stage of the Lifecycle. You may recall the Consensus stage essentially represents the execution of the Long Range Planning process. It begins when the LRP process is agreed to and finalized and ends when the final budget request data and accompanying benefit forecast (or forecasts if scenarios are used) have been collected.

Not every potential pitfall in the LRP process can be effectively designed away, but a good LRP design will help head off many of the problems before they occur. At a minimum, an effective design can help a team recognize potential pitfalls as they start to occur so that they can take the necessary steps to remediate them. An effective LRP process will ensure that data is collected reliably, consistently, and in a timely fashion.

There are some potential pitfalls associated with the Consensus stage. The names we have given them in our LRP Pitfall series are: Self-Fulfilling Prophecies, Manual Manipulation, Risk Homogenation, and Class Warfare. These provocatively titled potential challenges are very real – we have seen them happen and helped head them off.

The Self-Fulfilling Prophecy is the first potential pitfall to look out for when executing LRP. This potential pitfall is like the guidance problem we discussed and even portrayed in our last post, but it is much more insidious. A self-fulfilling prophecy occurs when the decision makers (usually the executive team) dictate some strategic goals for the business and define carefully which initiatives fit into which goal. Sometimes this is even done with the help of the finance teams running the LRP process.

Consequently, budget requests reflect the corporate goals they are designed to represent, and teams are all positioned for maximum alignment with the corporate aspirations. To some extent, this is a good thing, since corporate goals should require some repositioning. However, there is a fine line between reorganizing to meet priorities, and reorganizing to “game the system.”

It is possible to avoid creating an incentive for this behavior. Corporate goals are to be embraced by the business; they should never be issued as constraints for planning. A long range planning process which creates an environment where business leaders can show how their various requests align with corporate goals will be far more conducive to showing what is possible to achieve.

Risk Homogenation occurs when firms do not adequately take the risk factors facing their business into account. They might combine a lot of risk factors into one score, they may instruct business leaders to take risk into account when putting together their forecasts, or they might even ignore risk altogether. These approaches can hurt an LRP process that is trying to forge consensus in its execution.

A better design is to explicitly recognize risk factors which potentially plague a business, and provide some guidance around them. For example, a team may recognize internal risk factors such as execution risk and external factors such as price risk. In this case, relying on business leaders to calibrate risk scoring is not the right approach. Providing guidance with a specific scoring system for each risk factor is. In this way, an LRP Process can forge consensus in a way that avoids Risk Homogenation.
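As a concrete sketch of what per-factor guidance might look like, here is a minimal example in Python. The factor names, the 1-to-5 scale, and the anchor descriptions are hypothetical illustrations, not a prescribed system:

```python
# Hypothetical per-factor risk guidance: each factor gets its own 1-5
# scale with defined anchors, rather than one blended "risk score".
RISK_FACTORS = {
    "execution_risk": "1 = proven team and technology ... 5 = unproven on both counts",
    "price_risk": "1 = contracted pricing ... 5 = fully exposed to spot pricing",
}

def score_initiative(scores: dict) -> dict:
    """Validate that every defined factor is scored on the 1-5 scale,
    and return the per-factor scores (kept separate, never averaged)."""
    for factor in RISK_FACTORS:
        value = scores.get(factor)
        if value is None or not 1 <= value <= 5:
            raise ValueError(f"{factor} must be scored 1-5")
    return {factor: scores[factor] for factor in RISK_FACTORS}

# A budget request then carries a visible, comparable risk profile:
profile = score_initiative({"execution_risk": 2, "price_risk": 4})
```

Keeping the factors separate lets the finance team compare like with like across budget requests, instead of debating what a single blended number means.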

The potential problem of Manual Manipulation occurs when business teams formulate their budget requests in spreadsheets (often templates) and corporate finance teams consolidate these budget requests. In this capacity, finance teams often act as intermediaries between business teams and spend their time reissuing data instead of fostering consensus by analyzing data. Fortunately, this common problem in the Consensus stage can be headed off in the Conception and Collaboration stages by embracing an alternative which automates the process of budget requests and alignment through viewable permissions.

The Consensus stage of the LRP process can also lead to an outright problem we call Class Warfare. Class Warfare is our shorthand for business leaders realizing that the LRP process is inherently biased against certain initiatives. This problem frequently occurs when all initiatives are treated the same – for example, treating revenue-generating initiatives the same way as non-revenue-generating ones (such as IT investments).

Sometimes companies will use methods like EVA or even NPV to attempt to “level the playing field” between initiatives. Such methods invariably invite disputes, for example about the certainty of the time horizons involved. When business leaders feel that their initiatives are unfairly discriminated against, they will simply “adjust” their benefit calculations to accomplish what they feel is an equitable outcome.
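A quick worked example shows why such disputes arise. This sketch (in Python, with made-up cash flows and discount rates) applies the standard NPV formula to a short-horizon and a long-horizon initiative; the ranking flips depending on the discount rate assumed, which is exactly the kind of assumption business leaders will contest:

```python
def npv(cash_flows, rate):
    """Net present value, where cash_flows[0] is the upfront (year-0)
    amount and later entries arrive one year apart."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical initiatives: one pays back quickly, one is back-loaded.
short_term = [-100, 70, 60]            # 2-year horizon
long_term = [-100, 10, 10, 10, 150]    # 5-year horizon, back-loaded

# At a low discount rate the long-term initiative ranks higher; at a
# higher rate the ranking flips.
for rate in (0.05, 0.20):
    print(rate, round(npv(short_term, rate), 1), round(npv(long_term, rate), 1))
```

Because the “right” discount rate and forecast horizon are judgment calls, a single NPV ranking can feel biased to whichever group’s initiative it disfavors.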

While there is no perfect design, an LRP process which carefully defines initiatives in “buckets,” carefully defines the parameters of those buckets, and inspires confidence in these criteria can avoid this potential pitfall. As with the other potential pitfalls in the execution of LRP, proper design in the Conception phase and proper socialization in the Collaboration stage can help avoid the pitfalls which typically occur in the “Consensus” phase.



Applying the Finance Led Process Lifecycle to LRP

Quadrant 2

This is the second post in a series in which we are applying the Finance Led Process Lifecycle to the Long-Range Planning process, a name many companies use for the annual budgeting process which attempts to align execution with corporate strategy. In using the Finance Led Lifecycle to look at the four potential pitfalls of Long Range Planning, it is worth noting that they fall in the quadrant of the Lifecycle called “Collaboration.”

The Collaboration stage in the Long-Range Planning process is a particularly vulnerable one, because it is where finance begins to socialize the LRP process with the broader business constituency outside of the finance and strategy communities for the first time. Remember, the Collaboration quadrant is formed when a process is still being revised and formulated but is also being exposed to the broader business community for comment and feedback. At the beginning of the stage, the idea which has been formulated within the finance (and sometimes the strategy) communities is exposed for comment and revision. The goal by the end of this quadrant, as the arrow in our matrix implies, is to form enough consensus around the revised process that it is ready to be executed. We will discuss that quadrant in our next post.

While the process is still being designed and commented on, there are four potential problems to avoid. Of course, we want to avoid a complete redesign, which would send us back into Quadrant One – the whole point of our last post was to avoid common pitfalls in design so that we could avoid a complete redesign. Still, there are a few design suggestions which commonly arise in this phase and which are to be avoided.

First, in the design process, there are often design suggestions that can lead finance teams to incorporate an infinite (or at least very long) loop. We call this potential pitfall a Moving Target. It occurs when one part of the business which works with another part observes that it is unable to accurately forecast or request budget until it knows how much the other part of the business will invest and/or commit. The other part of the business may make the same argument.

Attempting to accommodate these requests may seem innocuous enough, but they can result in significant delays and a lack of confidence in the Long-Range Planning process. It is vital that during the Collaboration stage all parties agree that they will communicate and put forward a single request based on that interaction. The best way to achieve that outcome is to reinforce that the Long-Range Planning process itself be “blind” – meaning that groups can never see plans and commitments outside their group. Communication between elements of the business is vital and to be encouraged, but it is equally vital to emphasize that the LRP process is NOT the appropriate place for that communication.

The second potential pitfall to avoid in the Collaboration stage is the one we call a Self-Fulfilling Prophecy. This design pitfall commonly occurs when business constituents argue for narrow guidance in the budget process. At first blush this may seem like a good idea – and we are certainly not advocating for overly broad guidance – but it generally does not work out that way.

During the Collaboration stage, business constituents will attempt budget end-runs, trying to narrow budget guidance through clues. At this stage, each business constituent will usually “jockey for position” by arguing that their department is owed a greater budget allocation. They will often fish for clues about the overall budget allocation by asking about things like “affordability.”

An LRP process design which starts with overall guidance about affordability for each department virtually guarantees that each department or division will hit that affordability level almost exactly. A process drawn up in such a way ties the hands of the business leaders and all but assures an outcome. That outcome generally does not give the business much flexibility to meet its objectives. Ultimately this kind of design usually leads to problems like the one shown at left.

The best practice we have seen is to design an LRP process which includes guidance for multiple scenarios (usually three) – a high, medium, and low case for each part of the business. This essentially asks business constituents what they would be able to achieve with an x% increase or an x% decrease in budget. Then, reasonably narrow guidance can be given for the medium-case scenario, ensuring that the Self-Fulfilling Prophecy can be avoided and making business constituents feel the requisite Collaboration has been achieved.
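As a minimal illustration of that guidance structure (the budget figure and swing percentage below are hypothetical), the three-scenario envelope can be expressed as:

```python
def scenario_guidance(medium_case: float, swing_pct: float) -> dict:
    """Build high/medium/low budget guidance for one part of the
    business from a medium-case figure and a +/- swing percentage."""
    return {
        "high": medium_case * (1 + swing_pct),
        "medium": medium_case,
        "low": medium_case * (1 - swing_pct),
    }

# Hypothetical: a $10M medium case with a 15% swing in either direction.
guidance = scenario_guidance(10_000_000, 0.15)
# Each business constituent then describes what they could achieve at
# each level, rather than simply spending to a single dictated number.
```

The point of the envelope is that guidance is only narrow for the medium case; the high and low cases force the business to articulate trade-offs instead of anchoring on one number.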

A third design pitfall which should be top of mind during Collaboration on the LRP process is building in too many measures, a pitfall we call Objective Obsession. It seems ironic that the business would ask for specific criteria – a business leader who asks to be measured seems like a good thing. It is, but overdesign of measurements, especially in an LRP process, can be damaging.

An LRP process which enforces strict measurements is no better than one which offers overly specific guidance. For example, business leaders who influence the LRP design process to include very specific metrics can effectively “game the system” by specifying budget requests designed to maximize whatever metrics are used in scoring, be they related to customers, transaction sizes, retention, revenue per employee, or any number of other commonly used metrics. Avoid this pitfall by providing metrics tied to budget request completion and timeliness; these are effective measures of an LRP process in most companies. When collaborating on the creation of measurements related to budget requests, focus the discussion on metrics related to the process itself.

There is a very important fourth pitfall to avoid in the Collaboration quadrant. We call it Data Inequality. It occurs when an LRP process is influenced to take place in a vacuum. We have already mentioned that the Long-Range Planning process is not the place for communication between business leaders about their specific requests, joint initiatives, or corporate goals which affect each other. This communication is best left outside the LRP process. However, there is one important aspect of sharing information which can and should be suggested during the Collaboration stage: data sharing for calibration.

When business constituents argue (as they probably will during this all-important Collaboration phase) that they cannot effectively make their budgets without knowing what another part of the business intends to do, this is an appropriate time to suggest a sharing session designed to calibrate data. This is not to say that the sharing session needs to provide each group visibility into the data of another group; that is NOT the intention.

The intention of these sharing sessions is to make sure that everyone is measuring their investments in the same way, recording their forecasts in the same way, and so on. Designing these sharing sessions into the process during the Collaboration stage serves two purposes: to create confidence that sharing will occur, and to increase confidence in the results. The most common thing we hear during LRP is a complaint that another group is not being fair or honest in reflecting its requests or benefits. Sharing sessions which put everyone on the same page can head this problem off before it happens.

In our next post we will examine some of the pitfalls of Long Range Planning which can occur during the Consensus phase of the LRP Process Lifecycle, and we will take a look at how to head off those problems during the execution of the process.



Applying the Finance Led Process Lifecycle to LRP

Quadrant 1

In our last post, we mentioned that we had just completed a series on the pitfalls companies frequently fall into when conducting their Long Range Planning (LRP) exercise. We also mentioned that we were embarking on a series of blog posts designed to apply a methodology called the Finance Led Process Lifecycle to those pitfalls to help companies learn how to avoid them. Although each pitfall carried with it a specific method for avoiding that challenge, the Finance Led Process Lifecycle is designed to look at those potential pitfalls comprehensively from the outset, so that all of them can be avoided.

Recall that the Finance Led Process Lifecycle is a two-by-two matrix where the horizontal axis ranks finance involvement from low to high, and the vertical axis ranks corporate involvement from low to high. The first quadrant is composed of low finance involvement and low corporate involvement. We call this quadrant the “Conception” stage because it is where Finance-Led Processes originate. This quadrant is where finance teams begin to think about the process they will lead, and where they begin to design it.

Considering the nature of the Long Range Planning process, one could easily posit that all fifteen potential pitfalls could be avoided with proper design. That would, at first blush, seem like a logical conclusion. What Agylytyx has learned firsthand is that this is NOT the case. Unfortunately, sometimes even the best-designed LRP process can still fall into potential pitfalls if not carefully guided throughout its execution.

Still, there are some common challenges which can be headed off with effective design. We would strongly urge anyone developing a Long-Range Planning process to design the process correctly during this critical Conception stage. Doing so can save a lot of time and heartache later in the process. During this stage, consider the following.

There are three key pitfalls to avoid while in the design process: 1) premature destination, 2) goal misalignment, and 3) tool misapplication. Each year when finance teams are preparing to lead the organization through a Long-Range Planning cycle, these are the critical challenges which adequate preparation and the right design can prevent.

The first of these is avoiding what we at Agylytyx have come to call the “premature destination.” The name is what it sounds like – designing a process and issuing guidelines for that process which are so narrow as to ensure that a certain outcome is guaranteed. This potential pitfall can and should easily be avoided at the design stage of the process, and therefore it fits in the first quadrant.

When issuing guidance for engaging in LRP, be sure to craft your messages in such a way that you do not presuppose an outcome. It is always preferable to hear these outcomes from a business constituent. Even if the outcome is one your team already expected, it is better to let the business bring the idea forward; in this case, they are much more likely to take ownership of the idea. We mentioned in our post on premature destination that guidance could take the form of scenario planning – soliciting feedback from business leadership on multiple alternate funding scenarios, or at least providing them that option. One thing is clear: adequate design of the LRP process involves creating guidance to engage that process which does not presuppose an outcome.

We also mentioned a potential problem with Goal Misalignment. This potential pitfall could sit in every quadrant – as we mentioned in our post, aligning the LRP process with corporate goals is something that needs constant monitoring because it can go off track at any point. The reason we put it in Quadrant One is that there is a very clear need for a finance organization to resolve, from the very beginning of the design process, that corporate goals will play a key part in the process.

We also mentioned a couple of possible approaches which are worth considering at the design stage, such as including corporate presentations in the same environment used for creating LRP input, and requiring that input be linked to specific corporate goals. These are two examples of ways that finance teams can and should link LRP specifically to corporate goals. If the emphasis on corporate goals is not baked in at the design stage, we have never seen it effectively incorporated into LRP.

We mentioned another critical aspect of Long Range Planning which must be addressed in this Conception (Design) stage: using the proper tool for the Long Range Planning job. Too often Agylytyx has seen very large companies who are not willing to invest here. Often, companies will invest a lot of money in their ERP system and the consultants who implement it, but they are unwilling to invest what often amounts to a tenth of that sum to implement a tool and/or consulting team that is purpose-built for Long Range Planning.

Instead, companies will try to use their existing ERP investment to conduct Long Range Planning exercises. We have never seen that work. We have seen these clients revert to using the tried-and-true methods of creating elaborate templates in spreadsheets. The solution is simple: using the right tool for the job gives finance more control (heading off a lot more potential pitfalls) and makes the entire process much faster, easier, and more flexible.

In our next post, we will look at the potential pitfalls companies face in the “second quadrant” of the Finance Led LRP Process lifecycle – the Collaboration stage.



Applying the Finance Led Process Lifecycle to LRP


Back in November, Agylytyx publicly disclosed a methodology which we and our clients have found helpful. We call this methodology the “Finance-Led Process Lifecycle.” It is a simple two-by-two matrix useful for analyzing and predicting the success of any process that finance will lead across a company. The methodology is particularly helpful for finance departments in large companies, who often are called upon to lead cross-company processes which involve constituents from multiple lines of business and decision makers in a corporate environment who rely on the output from such processes. It is worth reviewing that methodology to understand it fully and how it is applied.

These types of processes happen often in large companies, and they usually play a large and important role in linking strategy to execution. Those of us who spend our time with corporate finance departments in large (and even many midsize) companies are not surprised at the frequency or impact of these types of processes. They might include the development of budgets, analytics, reports, investor presentations, quarterly reports, portfolio reviews, long range plans, scorecards, dashboards, and a lot more.

One of our first public applications of the Finance-Led Process Lifecycle was to analytics. In that series of blog posts, we mentioned that Agylytyx was increasingly seeing situations where finance departments were the “owner” of the analytic processes within a company. As usual, with great responsibility comes great expectations. Since other business constituents were (or should be) involved with this process, it fit the classic definition of a “Finance-Led Process” and was therefore an ideal candidate for analysis using this methodology. Given the emerging importance of analytics, especially in a company’s strategic direction, Agylytyx was able to apply the methodology to provide a lot of valuable insight to finance departments in support of the design and execution of this process. The ultimate result is depicted here, and you can read a full explanation of this application of the Finance-Led Process matrix here.

The Long-Range Planning process falls into the same category. It is a Finance-Led Process. It relies on cross-company input. It results in a very important set of outputs on which key corporate decision makers (usually including the CEO) rely. Correct design of LRP can result in a significant increase in the value of the process.

Agylytyx just finished a series on the potential pitfalls of Long Range Planning in which we looked at potential problems, their diagnoses, and their impact. We even looked at what some of our clients have done to address each pitfall. In this next series, Agylytyx will use the Finance-Led Process methodology to address each of these potential LRP Pitfalls, including how to head them off by anticipating and even designing them out of the realm of possibility.



The 15 Pitfalls of Long Range Planning


Agylytyx just concluded a series on some of the common challenges big companies face in their annual planning and budgeting exercise, most commonly called the Long Range Planning (LRP) process. This common set of observations comes to us from both in-house and consulting experience. In this series, we explored 15 sets of common challenges facing those departments (usually corporate finance, occasionally corporate strategy) during their LRP activities.

These posts each followed the same format:

Describe the Pitfall

Identify the Early Warning Signs that this Pitfall is Starting to Occur

Explore the Potential Impact of this Pitfall

Introduce Ways to Head Off This Pitfall

The entire series is available on our blog, including a post corresponding to each of the 15 Common Pitfalls which Agylytyx identified in the LRP process. The entire series is also available free in one white paper from Agylytyx upon request.

Agylytyx has also published a methodology for analyzing Finance-Led Processes. We have found this matrix helpful in improving the likelihood of success of any Finance-Led Process. Because LRP is typically a finance-led process, it is appropriate to use the Finance-Led Process Lifecycle to analyze the 15 Pitfalls of the LRP process.

Next, we will embark on a new series of blog posts designed to use the Finance Led Process Lifecycle to explain when these potential pitfalls are likely to emerge. The application of the Lifecycle will also identify how to prevent them in design, nip them in the bud if they do emerge, or at least mitigate their impact.

Agylytyx experts are also available for consultations on your long-range planning process or your budget-strategy linkage. Although every situation has unique nuances, there is very little we have not seen in these processes many times before. Feel free to reach out to us to compare notes.



The 15 Pitfalls of Long Range Planning

Common Pitfall #15 Tool Misapplication

Assuming an ERP system, or a module attached to an ERP system, will be sufficient to handle LRP or model LRP data. Trying to fit a square peg in a round hole by applying or extending existing tools to handle the process. Force-fitting systems, making them commit unnatural acts.

The problem.

Many companies make the mistake of assuming that a tool which was not designed for the planning process can be used for that purpose. This is especially tempting because companies have invested so much money in systems and tools. It is complicated by many tool vendors who attempt to position their application as a solution for a problem that it was never really designed to address. Many companies spend a lot of additional time, effort, and resources attempting to make tools commit unnatural acts in support of their planning process.

The symptoms.

Often tools are misapplied under the guise of “avoiding duplication of effort” or “using what we already have.” In these situations, vendors will often present additional “modules” which a company “already owns the license to.” These kinds of statements are generally red flags which indicate that a company is considering or is already committed to using the wrong tool for the planning job.

The impact.

When the wrong tools are used, the entire process is jeopardized. Most often, the process will be redesigned to reflect whatever capabilities the systems will accommodate. This often means a key part of planning is simply left out. If attempts are made to accommodate this phase of planning, it usually means reverting to a manual process. Such a last-minute reversion is often worse than skipping that part of the process entirely – a botched job is worse than no attempt because of the illusion of objectivity it creates. Processes that can and do suffer in these circumstances are alignment, optimization, calibration, and scenarios. The impact of each of these has been discussed elsewhere in this paper. Unfortunately, using the wrong tools risks creating one or more of these impacts simultaneously.

The solution.

The only solution to using the wrong tools is to use the right ones. Many companies still use manual processes to approach the planning process, typically relying on spreadsheets. While this approach is not a best practice, it is a better practice than attempting to apply another tool which isn’t right for the job. There are tools which really can help automate the planning process, because they were designed for that task. Finding and applying these tools represents a best practice. Automating the planning process using the proper tools can help facilitate alignment, calibration, optimization, corporate goals alignment, and a lot more. In fact, automation is mentioned several times throughout this paper as a best practice solution which avoids many common pitfalls in the planning process.



The 15 Pitfalls of Long Range Planning

Common Pitfall #14 Goal Misalignment

The problem.

Optimizing return on invested capital without a view of whether that set of investments takes the company where it wants to go strategically does not make a lot of sense. Said another way, the most efficient financial return is not always the best business solution. An efficient planning process that is disconnected from corporate goals is like a ship with a powerful engine whose rudder is just a little bit off – the journey may be swift, but the destination will be wrong. To extend the analogy, a planner may be swift in the execution of the planning process, but if that process is not aligned to corporate goals, a company may find itself completely off course at the end of the journey.

The symptoms.

This pitfall is insidious because it may not be immediately obvious. There is no single signpost which can help determine whether goals are aligned or not. Further, because corporate goals are often stated as objectives of the planning process from the outset, and because business owners are often assumed to keep corporate goals in mind, there is usually an assumption that the processes are linked when in fact they are not. In fact, usually the opposite is the case: top business owners, in their haste to plan their business performance, usually do not think innately in terms of corporate goals.

Because it is so important to link the process to corporate goals, because there are no obvious symptoms, and because few companies have linked the two, this situation is a rare example of “guilty until proven innocent.” A planning process that is in sync with corporate goals usually has one of the solutions identified below built into it. If one of the solutions below is not in place, it is a safe bet that the process is out of sync with corporate goals.

The impact.

The consequences of not having planning processes synced up with corporate goals are self-evident. The only way to attain corporate goals is to execute on them, and execution starts with budgets, which are an outgrowth of the planning process. If the planning process is not aligned to corporate goals, then by definition the goals cannot be completely attained (if they are, it is by accident, not design).

Over the long term, results vary. In some companies, corporate goals are revised to reflect the direction the company is already taking. In this case, companies essentially cede control of their corporate governance to their management execution. This means that the strategic guidance and vision a company provides publicly will appear to falter; senior management is often sacked, and an entirely new governance structure is put in place. In other companies, it is the executing executives who pay the price for failing to meet corporate goals, leaving others in charge to re-plan a solution which will bring the company more into line with the goals.

Neither option is good for a company. Some companies do not survive the resulting turbulence. Those fortunate enough to survive continue to live with the consequences of a planning process which is out of alignment with corporate goals.

The solution.

Linking corporate goals and planning requires persistence and determination, but the rewards are high. Truly making the link requires getting employees throughout the organization to think that way. As with any other behavior change, this requires repeated emphasis. There are some best practices which can help. One large logistics company puts a presentation which includes its corporate goals into the online team rooms where its planning process is managed. In this way, each employee involved in planning knows that corporate goals are top of mind.

Another best practice is to create interlocking linkages to corporate goals, essentially building a requirement into the planning template to express every resource request in the form of a corporate goal. This approach has the added benefits of providing visibility into the cost and benefit associated with each goal and of making goal attainment measurable and trackable.

Another best practice for planners is to constantly review the requests for consistency. Ultimately, planners themselves are responsible for the alignment of the requests they receive to the corporate goals, and it is a planner’s responsibility to challenge data providers when they cannot see an obvious link.

Yet another best practice involves building the corporate goal alignment review into planning meetings or checkpoints. The need to have sharing sessions between elements of the organization has been discussed already – inserting a requirement to review an organization’s alignment with corporate goals puts the onus on business owners to draw that linkage.



The 15 Pitfalls of Long Range Planning

Common Pitfall #13 – Self-Fulfilling Prophecies

The problem.

Self-fulfilling prophecies generally require the unconscious involvement of two parties to the planning process: the decision makers and the planners that support them. This situation develops when decision makers feel that they have good instincts about the right direction for a business. It also requires planners who can take the initiative in the planning process, not just coordinating it, but controlling it. In some companies, this amounts to defining the expected outcomes and then imposing them in the guise of a “planning process.” Often, companies in this mindset could simply skip planning altogether and proceed directly to budgeting.

The symptoms.

Companies who essentially define what they want to receive will usually hear about it from business owners. For example, a business owner may ask about the availability of incremental funding or inquire about the appropriate way to submit additional ideas for funding. Many companies will even develop a special fund like an “innovation fund” which is generally separate and apart from the planning process. While such a fund may be a good idea, keeping this fund disconnected from the other parts of the business in the planning process can result in strategically disconnected business concepts. Many times companies with self-fulfilling planning prophecies will also have planning cycles that are so short they seem to defy logic. A two week planning process, for example, is hardly a planning process at all; it is really a budgeting exercise.

The impact.

Companies who have developed self-fulfilling planning processes are usually poor long term planners. For this reason, these companies typically find themselves having to revise their growth targets frequently. While they may execute well at a gross margin and operating margin level, they will rarely achieve the topline growth for which they plan. This is because they prefer “business as usual,” often issuing only minor adjustments to functional efficiency and operating targets.

The solution.

Those companies who define their own self-fulfilling prophecies need to encourage their business owners to think more broadly about what they could achieve. Usually, this means granting business owners the latitude to create different scenarios for their business. A best practice in this case is to define the parameters of separate scenarios for each business owner. This method does not need to expand the planning process timeline dramatically. Typically, business owners can create scenarios for spending more or less money almost as quickly as they could create a “business as usual” scenario. The expansion in the planning process is worth it. Incorporating innovation into the planning process will help ensure the identification of growth opportunities which are realistically achievable and consistent with the existing business. Many firms find that automating this process also helps memorialize these scenarios so they can be used to analyze optimal spending. By encouraging business owners to think about the way scenarios impact their business, planners can retain influence in the planning process without making their planning a “self-fulfilling prophecy.”



The 15 Pitfalls of Long Range Planning

Common Pitfall #12 – Moving Target

The problem.

The moving target problem occurs most frequently when a company has many groups who are dependent on each other for numbers. This is especially true in large matrixed organizations, but it can happen in smaller organizations as well when those firms have multiple business units and functions. In this environment, a moving target situation develops when numbers changing in one place necessarily result in changes elsewhere in an organization. The process then feeds back on itself, because changes in one place necessitate changes in another place, which in turn create changes in yet another place.

Consider the situation where a company has four business units (let’s call them A, B, C and D) and four central functions (let’s call them engineering, sales, operations, and finance). Functions generally have budget targets that they operate within. When one business unit increases its resource requirements on one function (let’s say A tells engineering they need more headcount), engineering is forced to tell another business unit (let’s say B) that they will be able to provide fewer heads than originally anticipated. Because B now has less engineering headcount than they expected, they may have fewer products to sell, so B may reduce the headcount they report needing from sales. However, sales has revenue targets, and those revenue targets are often tied to individual quotas, so if salespeople want to make their targets, they will need to find another business unit to support. For this reason, sales may now tell Business Unit C that they have deployed additional sales headcount. Even if Business Unit C is happy to have the additional sales firepower, they may still be concerned about reaching their OPEX targets, so C may tell operations and finance they will need to do more with fewer people. And the situation goes on and on...

The symptoms.

Some of this kind of dialog is healthy. It is organizational alignment; it is a key component of effective planning. When these discussions become too extended, they become counterproductive. They can actually prevent the true work of analysis and optimization of a portfolio. Warning signs often include missed deadlines. Associated comments include things like “we’re still waiting for information from X” or “we are late because X kept changing their numbers.” When a process is manual, warning signs will often include extended iterations of planning. Planning processes that involve more than three iterations of numbers, guidance, and deadlines are absorbing too much time and getting bogged down in detail.

The impact.

When planning processes get confused with alignment processes, both can bog down, so neither is ultimately successful. When alignment breaks down, a disconnect is usually perpetuated between different parts of an organization. Generally, when this misalignment happens, organizations will “move resources around” from other parts of their budget or from future quarters in order to “make their numbers” or “meet their targets.” Usually, this means that organizations will miss the quarters later in their fiscal year (3rd and/or 4th).

In addition, extended planning cycles mean that business owners are bogged down in planning and alignment for too long and often neglect important elements of their business. When this occurs, companies often experience much poorer than forecast results during the quarters of the planning process. Surprises in business results during the planning process can be the result of a process which takes too long and is too detailed.

The solution.

There are at least two best practices to consider here. Firms that provide more direct guidance at the outset of the process typically reduce the cycle times involved in planning. Without being so specific with guidance as to dictate a “premature destination,” companies which provide a realistic envelope from the outset of planning can significantly reduce cycle times.

Another best practice is to set expectations for each cycle in the planning process from the outset, with an expected outcome. For example, a planner might indicate that the “first round” of planning will last two weeks and that constituents in the process should be within a 10% range of their requirements for other parts of the business. The planner would define expectations for the second and third rounds in the same way. Finally, a planner might specifically state the end goal of the last round in planning, emphasizing a goal or target range of alignment, and noting that final alignment would be dictated by the budget process itself. This helps set the mindset among participants that the goal of planning is to optimize investments in the portfolio, not to achieve perfect alignment.
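
The round-by-round expectations described above are easy to make concrete: publish a tolerance for each round and check every revised number against it. The sketch below is a minimal illustration; the specific round tolerances (10%, 5%, 2%) are illustrative assumptions, not prescribed values:

```python
# Published convergence targets per planning round: each constituent's
# revised number should fall within the stated tolerance of the prior figure.
ROUND_TOLERANCE = {1: 0.10, 2: 0.05, 3: 0.02}  # illustrative values

def within_tolerance(round_no: int, prior: float, current: float) -> bool:
    """True if the revised number stays inside the round's allowed drift."""
    allowed = ROUND_TOLERANCE[round_no]
    return abs(current - prior) <= allowed * abs(prior)

# Round 1: a request moving from 1,000 to 1,080 (8% drift) is acceptable...
print(within_tolerance(1, 1000, 1080))
# ...but the same 8% move in round 3 signals a moving target.
print(within_tolerance(3, 1000, 1080))
```

A check like this lets a planner flag "moving target" behavior objectively instead of arguing about whose numbers changed last.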

Finally, companies who automate the planning process can significantly reduce cycle times and increase alignment within the organization while providing optimization tools at all levels of the portfolio. A best practice solution involves a central repository for data where each element of the organization is able to access information entered by another element of the organization which impacts them. This way, as changes are made, parts of the organization can react to those changes in real time. Using this type of configuration still requires cycle times, but it also frees up the business owners’ time to be more productive – they can spend their cycles actually analyzing and optimizing their expenses, instead of merely reallocating dollars.



The 15 Pitfalls of Long Range Planning

Common Pitfall #11 – Data Inequality

The problem.

Many companies find themselves with data sets that do not paint equal pictures. This can occur when business owners don’t use the same sets of assumptions or data points when they complete their forecasts. It happens more often with revenue than with cost of goods sold or expense data, because the easiest way to paint a rosy picture for a project or set of projects is to portray a “hockey stick in revenue” – especially when there is little accountability in the outcomes. Usually, cost of goods sold data and operating expenses like headcount are published by a company or its functions.

The symptoms.

When completing planning information, business owners will often ask “how are other people answering this question?” These types of questions may express legitimate concern for making adequate comparisons across information in the planning process. If business owners have concerns that others might be inflating their answers at their political expense, they may make statements like “We try to be as realistic as possible when answering these questions” or “We don’t just tell you what we think you want to hear.” These kinds of statements are a red flag that there are concerns that the planning process is not adequately calibrated. Usually, there is fire where this kind of smoke exists, so calibration should become a major concern.

The impact.

For obvious reasons, data which is not accurately calibrated can lead to skewed investment decisions. Many times, decision makers will attempt to remedy this lack of calibration by attempting to compensate for the skew, inherently discounting some data, trusting other data, and even inflating data which they think may be too conservative (yes, this happens too). When decision makers inject themselves into the planning process in this capacity, the result is generally little better than “decision by instinct” – significantly undercutting the purpose of planning in the first place.

When data isn’t adequately calibrated, teams that have painted a rosier scenario for their projects (whether intentionally or not) generally receive an inequitable share of the invested capital. In this case, collaboration between groups can break down due to political resentment, pressure to meet an impossibly optimistic scenario increases, and targets/plans are often revised. The result can be a capital whipsaw which reallocates resources mid-stream, and generally no one makes their original forecasted plan.

The solution.

Calibrating data is not always easy to achieve, but a company that makes a commitment in this area can make it happen. When it does, confidence and participation in the planning process itself improves. There are at least four best practices worth mentioning.

The first is a sharing of information. While the data itself is often confidential, groups that share how they made calculations can help each other improve their processes. Planners can help coordinate this information sharing by identifying best practices in forecasting, and scheduling sharing sessions.

Another best practice involves guidance. Planners are generally in a good position to provide guidance for how questions should be answered. For example, planners can publish guidance for a scoring tactic such as “if X occurs, would you expect your forecast revenue to be a) more than 3% higher; b) 1-3% higher; c) about the same; d) 1-3% lower; or e) more than 3% lower?” When objective criteria are provided to all the constituents of the planning process, planners dramatically increase the odds that answers will be calibrated.
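
Published guidance like this can be reduced to a fixed rubric so that every respondent's lettered answer maps to the same numeric score. A minimal sketch; the score values assigned to each band are illustrative assumptions, not from the original guidance:

```python
# Map calibrated answer bands to uniform numeric scores so that one
# business unit's "b) 1-3% higher" means the same as another's.
SCORING_GUIDE = {
    "a": 2,   # more than 3% higher
    "b": 1,   # 1-3% higher
    "c": 0,   # about the same
    "d": -1,  # 1-3% lower
    "e": -2,  # more than 3% lower
}

def score_response(answer: str) -> int:
    """Translate a lettered response into its calibrated score."""
    try:
        return SCORING_GUIDE[answer.strip().lower()]
    except KeyError:
        raise ValueError(f"Response {answer!r} is outside the published guidance")

# Two business owners answering the same question produce comparable numbers.
print(score_response("b"))
print(score_response("D"))
```

Rejecting answers outside the rubric is part of the point: free-form responses are exactly what destroys calibration.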

Another best practice is to automate the planning process. This is based on a very human instinct: for some reason, Excel templates will generally draw out a broader range of answers than a centralized repository will. Central repositories tend to make us think more formally and in a more structured, rigid way, while Excel templates encourage answers that are less formal and have a greater range to them. As well, planners have more immediate visibility across results with a central repository than they do with a template, and those completing the information know that. For best calibration results, use a central repository.

Finally, planners should not be afraid to question responses. Sometimes the only way to truly make sure that results are properly calibrated is to talk with the business owners who provided the forecasts. In this way, it is possible to see if the mindset and approach of those who provided the data are truly similar. Especially when data seems anomalous, planners should not be afraid to discreetly ask about the assumptions used to derive the data. Sometimes, those providing the data may even have made an error which only a planner could catch. Other times, a planner may learn about an approach which could be characterized as a best practice and shared back with other groups.



The 15 Pitfalls of Long Range Planning

Common Pitfall #10 – Premature Destination

The problem.

Some companies put the cart before the horse when it comes to their planning processes. Rather than solicit input from business leaders about potential funding scenarios, these companies provide specific financial guidelines to the business leaders in order to expedite the planning process. These companies typically have finance-driven planning processes with rather static portfolios. Because they tend to “play it safe” by keeping business leaders on a tight leash, they rarely rebalance investments across business units. In effect, the company “roadmaps” the destination to its business leaders at the outset of the LRP process itself.

The impact.

Because these companies rarely rebalance their investments, their portfolios tend to be fairly static. Since most business leaders will choose to spend their budgets on “keep the lights on” activities, companies in this trap will often have rather low “innovation” tendencies. The result is that these companies will often fall behind their competitors. This is especially problematic in very competitive marketplaces. These companies also foster a business climate which rewards those who do not take risks, allowing them to become complacent in their “business as usual” approach. In the long term, these types of companies will experience deteriorating business results.

The symptoms.

Companies who fall prey to the premature destination problem have telltale symptoms. Almost all companies start their planning process with some kind of window of guidance, but some are far too rigid. How much is too much to start with? Usually a company that issues budget constraints based on a percentage of a previous year’s spend (taking a “peanut butter” approach) across all business units is predicting the outcome of its process before it even begins. Business leaders in these environments often say things like “can we just drag and drop our plans from last year?” or they may ask “is anyone getting anything different?”

The solution.

At the outset of the planning process, provide guidance to business leaders which will encourage them to explore. If you must issue some type of guidance at the outset, couch it in the form of scenarios, such as “what would you do with X% more funding, the same funding, and X% less funding?” Treat business leaders as owners in the planning process, rather than asking them to go through a finance budgeting exercise. While this may mean more work for finance, the mentality will cascade throughout the business unit.



The 15 Pitfalls of Long Range Planning

Common Pitfall #9 – Accountability Decoupling

The problem.

Most companies have this problem: they do not track their long term forecasts against actual business results, and few track the outcome of long range plans at all. For this reason, there is little incentive to ascertain the validity of long range forecasting. Absent accountability for the data provided, “gaming” behavior is encouraged. Business leaders know that they can “hockey stick” their revenue or bookings projections for out years in order to obtain more operating expense. Since they know they will not be held accountable for future results, projecting deferred revenue carries no penalty.

The impact.

Business leaders will determine immediately which metrics are used and which ones are not. Not holding business leaders accountable for their projections leads to poor discipline in portfolio decisions. Pet projects get incubated, projects are hard or impossible to kill, and company performance suffers. Usually this problem manifests itself in revenue and OPEX “misses.” Often companies will incur duplicative charges for capital items, since there is little or no incentive for different parts of a business to work together.

The symptoms.

Companies experiencing this pitfall will usually have business leaders whose feedback on the planning process runs the gamut of emotions. Some will want to hedge the data they provide, and will say things like “I’m not really sure about these forecasts.” Others will ask “what decisions are being made based on this data?”, which is often a way of determining whether or not and to what extent to game the system. The more astute and experienced business leaders may directly ask “how do you plan to track this information?” or complain that forecasts made by other business units are unrealistic. One of the most telling signs that the long range forecasting process has become unreliable is that decision makers no longer put faith in the data. Often they will directly state that they don’t have confidence in future outcomes or projections. In this case, decision makers will use a very limited time horizon on which to build their decisions.

The solution.

Corrective action is easy to prescribe in this case, but may be difficult to implement. The most obvious solution is the one most often overlooked: track long term results. Most companies do not track long term business forecasts. However, tracking is not enough; companies must also reward people who project well. Usually, this means tying some portion of compensation to the ability to project accurately and to long term results. Few companies actually do this, but those who do experience better long term performance, for obvious reasons – they foster a culture and cultivate leaders with a long term vision.



The 15 Pitfalls of Long Range Planning

Common Pitfall #8 – Objective Obsession

The problem.

Some companies get carried away with scoring. These companies spend a lot of time thinking about how to quantify almost all aspects of their business. Everything from corporate goals to department culture can be, and has been, translated into numerical values. Many companies who fall into this pitfall use numerical scoring guides as a substitute for difficult qualitative discussions. Still others find that they ask for more data than can possibly be produced or sifted through. In these cases, critical resources may be filling out forms or templates at the expense of their business productivity, while other important analytical resources spend most of their time accumulating and sorting data and insufficient time actually analyzing it for results. Ultimately, when the data is accumulated and sorted, decision makers in these environments typically find themselves in an information overload situation – there simply are too many numbers and scores for them to make a real business decision.

The symptoms.

Complaints voiced within companies who are developing an obsession with objective criteria are often numerous and contradictory in nature. One common passive form of response is simply to provide incomplete data. Another common question is “how are other businesses going about completing this information?” Others may indicate a concern with calibration, making remarks such as “We are being honest about our evaluation” or “I wouldn’t necessarily trust other responses, they tend to be exaggerated.” Decision makers in the planning process may passively and politely accept the data, but not provide any real indication that the data was used in decision making. A more aggressive response from decision makers may question the validity of the data itself (“how did you come up with these numbers?”), or even openly refuse to accept data after a certain point, indicating that “they have enough data to make a decision.” If the company does continue down this path over the long term, the scores will start to normalize to the point that they become similar enough as to no longer be useful in distinguishing between projects. This is because data providers will engage in gaming behavior and/or get the scoring and calibration guidance changed in order to gain better evaluations for their type of projects.

The impact.

Ironically, objective obsession can be one of the most insidious and harmful roads a company can pursue in its planning process. In the short run, companies that over-quantify will find their lines of business deteriorating during their planning process, as business leaders spend too much time on the planning process itself. Instead of being a productive light-weight exercise which feeds into proactive budget formulation, the planning process becomes an encumbrance which weighs on business leaders and drags down corporate performance very quickly. Worse, decision makers may abandon their own instincts and alter critical budget allocation decisions based on data scoring models which don’t accurately reflect the state of the business – in such cases even a gut instinct would usually be more accurate. Corporate performance will then suffer in the year to come – sometimes dramatically. Finally, over the long term, a company which “sticks to its guns” in the quest to relegate almost everything to scoring models will find that its decisions become based on scores which may vary by 1% or less, ignoring the “margin of error” rule. This company will encourage gaming behavior by its business leaders by sending a signal that its leadership considers the planning process more important than actual business results. This is a strategy for going out of business, yet it’s a road that many companies continue to pursue.

The solution.

Scoring isn’t bad, and a drive to data purity isn’t bad either. There are times when such information is vital to the development of a strategy. But most strengths become weaknesses when they are overextended, and quantification is a perfect example. When organizations start to exhibit some of the signs mentioned above, it is time to take action toward putting a solution into place. Unfortunately, it is hard to put this genie back in the bottle, for a couple of reasons. The first is that a retreat from asking for certain information risks sending a signal that the business metric associated with that information just doesn’t matter anymore. That signal isn’t always accurate. Just because a quantitative metric isn’t appropriate for decision makers to use as a way to evaluate businesses does NOT necessarily render the metric irrelevant to each part of a business. For example, the metric “lifetime customer value” may make sense as a business metric to a certain division like a services organization. However, it may not be a metric that decision makers use to evaluate things like projects, technical spending, etc. So one key solution to an objectively obsessed planning process is to pick the metrics that matter, and then communicate clearly and openly about the ones which aren’t being tracked at the corporate level anymore and the reasons why. It is vital to know that metrics exist in the context of corporate success and are not just metrics for metrics’ sake.



The 15 Pitfalls of Long Range Planning

Common Pitfall #7 – Risk Homogenization

The problem.

Firms have varying strategies for dealing with the risk involved in a business. Some of the most simplistic approaches assume that risk is baked into forecasts or into financial metrics (like the discount rate in NPV, for example). Other approaches do involve some risk analysis but treat risk as a homogeneous factor, whether by analyzing risk financially (the classic “risk management” approach) or by asking business leaders to quantify the risk involved in their business. By treating the subject of risk as a single abstract entity, none of these approaches adds value for business leaders or decision makers, because none helps them really understand the source and composition of risk along various vectors throughout the portfolio – competitive risk, technology risk, demand risk, execution risk, and so on. Reducing risk to a single score usually involves little real analysis: either the scoring is done centrally by persons who may not have the right level of visibility, or it is decentralized with no guidance for calibration purposes. Without adequate stratification of risk, there is the chance that all risk will be concentrated in a single type of risk.

The symptoms.

Firms which suffer from a unified view of risk often have difficulty calibrating risk scores, and usually question whether or not they have fully assessed all aspects of the programs or projects in their portfolio. The first challenge usually manifests itself in questions such as “how do I rate the risk inherent in one project versus another?,” “are we relying on self-scoring here?” or “how can we be sure everyone is answering this question the same way?” The second challenge (not fully articulating all the various types of risk) usually manifests itself in questions about the type of risk itself. For example, business leaders may ask “how are we accounting for potential competitive pressures across the portfolio?” or “aren’t some investments more exposed than others to potential disintermediation?” or “some of these investments seem right in our wheelhouse, but aren’t some of these outside of our expertise?” A truly diversified view of risk manages and measures each of these independently, and is ready to give an account of the approach taken to each of the challenges mentioned above.

The impact.

Firms which do not have a comprehensive view of risk tend to have investments which are either uniformly conservative (i.e. they keep the firm from embracing enough risk), or they concentrate risk in a particular area, leaving the entire firm undiversified and exposed. For example, a company which has not taken enough risk may have a profile that emphasizes short term returns – a strategy which leaves the firm fighting each year to find projects that can bring incremental growth. These companies usually have lower rates of innovation, and may find themselves outpositioned in the market. Companies which do not adequately describe the various kinds of risk may not recognize or acknowledge that their risk is concentrated in a particular element. For example, the company that does not adequately acknowledge competitive risk may find that its portfolio selection has left the company as a whole vulnerable to competitors. Firms that do not explicitly acknowledge execution risk may end up with an unbalanced portfolio which leaves the firm stretched too thin. These companies usually cannot sustain all their investments, and end up falling short of financial performance, usually in the last quarter of the fiscal year.

The solution.

Different types of risk need to be acknowledged, but they also need to be quantified in a meaningful way, and each approach needs to be calibrated. Explicitly understanding the different types of risks facing the firm means honestly thinking through all the possible elements of exposure a company may have. For example, some common elements of risk include execution risk, market/demand risk, competitive risk, technology risk, price/supply risk, price pressure risk, political instability risk, and economic risk. Not all of these risk factors apply to every firm, and this list is certainly not an exhaustive one. However, any risk factor identified needs to be defined and thought through in the way it impacts a particular project. Further, guidance must be issued on each relevant risk factor to help assure that the evaluations are similarly calibrated across business units, functions, etc. A system which automates the process of posting guidance, calibrating scores, and capturing those scores is usually vital to achieving a solution here. This approach enables the creation of risk profiles for various funding scenarios – an aggregative view of the types of risk which would be faced by such a permutation of projects. For more information on appropriate quantification and calibration tactics, please contact Agylytyx directly.
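
As a sketch of the aggregative view described above, each project can carry a score per named risk vector, and a funding scenario's risk profile can then be computed as the spend-weighted score per vector. The projects, risk factors, and scores below are illustrative assumptions, not a prescribed model:

```python
# Each project carries a score (1 = low risk, 5 = high risk) per named
# risk vector, alongside its proposed spend.
projects = {
    "A": {"spend": 4.0, "risk": {"execution": 2, "competitive": 4, "technology": 1}},
    "B": {"spend": 6.0, "risk": {"execution": 5, "competitive": 2, "technology": 3}},
}

def risk_profile(funded: list[str]) -> dict[str, float]:
    """Spend-weighted risk per vector for a given funding scenario."""
    total_spend = sum(projects[p]["spend"] for p in funded)
    profile: dict[str, float] = {}
    for p in funded:
        weight = projects[p]["spend"] / total_spend
        for vector, score in projects[p]["risk"].items():
            profile[vector] = profile.get(vector, 0.0) + weight * score
    return profile

# The profile makes concentration visible: funding both projects still
# leaves execution risk as the dominant exposure.
print(risk_profile(["A", "B"]))
```

Running the same function for each candidate funding scenario yields the comparable "risk profiles per scenario" the text describes, and makes concentration in any single vector immediately visible.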



The 15 Pitfalls of Long Range Planning

Common Pitfall #6 – Pragmatic Profiling

The problem.

On its face, there is a temptation to reduce all projects to some financial metric. But expressing a project in terms of its NPV or EVA often ignores subtleties which exist in projects. For example, under a strictly financial approach, little or no consideration is given to project interdependencies, balance within a portfolio, risk profiles, forecast uncertainty, or the timing of profitability streams. The danger is that a common financial metric may result in a funding profile which is not the most desirable combination of projects for a firm.
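
The point is visible in the standard NPV calculation itself, which collapses a project's whole cash-flow shape into one number. In the sketch below (the cash flows and the 10% discount rate are illustrative assumptions), a steady earner and a back-loaded "hockey stick" project produce essentially the same NPV despite very different timing and risk:

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value, where cash_flows[0] is the upfront outlay (year 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Two projects: "early" earns steadily; "hockey_stick" back-loads its returns.
early = [-100.0, 45.0, 45.0, 45.0]
hockey_stick = [-100.0, 5.0, 5.0, 137.4]

rate = 0.10
print(round(npv(rate, early), 1))         # the two round to the same NPV...
print(round(npv(rate, hockey_stick), 1))  # ...hiding very different timing and risk
```

A ranking based on NPV alone would treat these two projects as interchangeable, which is exactly the loss of subtlety the paragraph above describes.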

The symptoms.

Firms experiencing problems with pragmatic profiling will often hear business leaders ask business related questions about recommended funding distributions. They will often point out that funding Project A does not make sense without also funding project X. Business leaders will often express frustration with financial metrics around their projects and will try to introduce business concerns into the metrics – for example, in the case of NPV, they may argue that discount rates should be changed on particular projects in order to better account for lower and higher risk.

The impact.

The impacts occur whenever organizations apply the same standard to all projects, and therefore fail to make adequate comparisons because they treat all projects in the same way. Organizations which rely too heavily on financial metrics for planning processes will usually skew their decisions over time toward those projects which show the highest financial returns, often at the expense of their ability to execute projects. Some of the reasons this can happen are obvious, but many are not. For example, a strictly financial/quantitative approach toward project selection often leads organizations to focus on short term returns at the expense of long-term investments. They may also find that funding decisions do not take into account the potential risk profiles, especially execution risks, within a company. Firms relying too heavily on a finance-driven process will usually find that the data used as input for decision making becomes less and less reliable over time. As a consequence, firms relying almost exclusively on financial data for decision making will miss their forecasts and projected guidance.

The solution.

Rather than rely solely on financial data, other data important to creating decision making profiles needs to be collected. Capturing qualitative information will actually make the financial metrics more reliable. Like projects will be compared to each other, and more balanced profiles will be created. A process which recognizes the need to collect such information will also instill more confidence in the planning process. A transparent process which emphasizes the incorporation of this qualitative information will reinforce that confidence.



The 15 Pitfalls of Long Range Planning

Common Pitfall #5 – Forecast Folly

The problem.

Firms often fall into the trap of assuming that confidence levels around later years in a forecast should be weighted the same as those for the following year. Many firms ask their business leaders to make long range forecasts over which they have little or no business visibility. Relying on time value of money adjustments to discount the impact of future years is not sufficient to solve this problem because these adjustments are still being made to point forecasts which may have little validity.

The symptoms.

Firms experiencing folly in forecasting often hear it from their business units. Usually, the feedback comes in very specific comments such as “we don’t really know what will happen to the business beyond a year or two,” or “we didn’t know three years ago what would be happening today, so how could we be predicting so far into the future now.” Another common phenomenon in these situations is the tendency to project “hockey stick” business results, where most or all of the benefits of a project or set of projects occur toward the end of the forecast period. This gaming behavior is designed to couch projects as viable long-term investments with fewer short term commitments, usually because business leaders know long term forecasts are rarely tracked and measured down the road, and/or that they will have the opportunity to revise the forecast in the next annual planning cycle. Finally, many firms experiencing forecast folly will find it necessary to change their forecasts and long range plans frequently and materially.

The impact.

Forecast folly is one of the most insidious pitfalls to impact a business. It can be one of the hardest to detect because the impact may not be felt for several years. If a firm has consistently relied on a single point forecast in long range decision making, and has done so for many years, especially without a long range tracking mechanism, the company will miss earnings estimates. When “gaming” behavior is encouraged, accountability is discouraged, and the firm will also lose the ability to course correct.

The solution.

Avoiding forecast follies requires a firm to take several steps. First, the uncertainty of forecasts in out years by various business units needs to be recognized and quantified. Calibration of the uncertainty involved in the forecasts should come in the form of specific guidance (i.e. “here’s how you score the certainty of your forecast”). Decisions should be made based on these banded ranges, not on point forecasts. Second, a tracking system needs to be put in place which memorializes and tracks the evolution of the long range plan. Of course, point forecasts will still be conducted throughout the year, but a typical solution compares plan to forecast to actual. Further, this process needs to be carried out over the long term, meaning each year long range plans and forecasts are memorialized and revisited in future years. The point of conducting this analysis is to discourage gaming behavior and reward business leaders with more accurate long term views of their business.
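The banded-range idea can be sketched mechanically. In this illustrative example (the certainty scale and band widths are assumptions, not standard guidance), a point forecast plus a calibrated certainty score yields a low/high range, with far-out, low-certainty years producing much wider bands:

```python
# Hypothetical sketch: converting a point forecast into a banded range
# using a calibrated certainty score (1 = least certain, 5 = most certain).
# The band widths per score are illustrative guidance values only.

BAND_WIDTH = {1: 0.50, 2: 0.35, 3: 0.20, 4: 0.10, 5: 0.05}

def banded_forecast(point, certainty):
    """Return a (low, high) range around a point forecast."""
    width = BAND_WIDTH[certainty]
    return (point * (1 - width), point * (1 + width))

# A year-1 forecast scored highly certain vs. a year-5 forecast scored
# highly uncertain, both with the same $100M point estimate:
print(banded_forecast(100.0, 5))  # roughly (95.0, 105.0)
print(banded_forecast(100.0, 1))  # (50.0, 150.0)
```

Making decisions on the $50M–$150M band, rather than on the $100M point, is what keeps out-year uncertainty visible in the plan.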



The 15 Pitfalls of Long Range Planning

Common Pitfall #4 – Organization Usurpation

The problem.

Some firms place too much planning authority in the hands of their finance group. These organizations usually have strong finance leaders who tend to speak with authority, often leading decision making discussions. When finance organizations have too much authority within an organization, decisions are typically driven based on data collected in the LRP process.

The symptoms.

Ideally, data collected in the LRP process mirrors the requirements of the decision making process. Even when it does, having finance control the planning process lends a very negative perception to the rest of the organization. Often, business leaders will make statements like “well, we’ve done what we can, I guess the rest is up to you,” or “beyond our data input, the planning process looks like a black box here.” They may also withhold data, often coming back with “updated” numbers or providing information well beyond stated deadlines. Often, politically powerful individuals will negotiate for additional investments, quota relief, etc. outside the official planning process. These are sure signs that finance is perceived as having too much influence in the planning process.

The impact.

Organizations which rely too heavily on finance for planning processes will usually adopt funding decisions which do not garner the confidence of various business leaders within the firm. When this circumstance arises, business leaders will often not participate fully in the process, and the entire planning process will ultimately revert to a budgeting exercise. When this occurs, the opportunity for corporate portfolio management is usually lost. Investment across an organization is not optimized. Over time, a company in these circumstances will find its funding decisions become less and less linked to its corporate strategic goals. Ultimately these companies will revise their investment planning processes.

The solution.

Finance can and should have a crucial role in the planning process. Finance is often the right instigator, collector of data, and source of authoritative information for the planning process. With great authority comes much responsibility, and so the challenge for finance comes in playing these often high profile and important roles without being perceived as a usurper of the planning process. For this reason, a successful planning process organized by finance will stress the importance of collaborative concepts like training, preparation, assistance, consensus building, and transparency.



The 15 Pitfalls of Long Range Planning

Common Pitfall #3 – Manual Manipulation

The problem.

Most planning processes today are driven through manual cycles. Even though historical data is often pulled from ERP systems, the forward looking data which is used for planning purposes is most often manually crafted into templates using Microsoft Excel. These templates are typically populated by planning elements throughout a company, and consolidated by the corporate function responsible for LRP (often FP&A).

The symptoms.

Firms suffering from manual manipulation usually know it, although they may have become so accustomed to it that they don’t realize there is another way. Companies experiencing manual manipulation of accumulated data typically have finance personnel working on data consolidation, parsing, and communication late into evenings and weekends during planning cycles. Worse, these finance personnel are often people whose time would be better spent analyzing data, not consolidating and reissuing it. Companies experiencing problems with manual data processing often find that their long range planning process requires multiple iterative cycles, often lasting several months.

The impact.

Companies with manual manipulation problems typically experience a degradation of their ability to execute during the planning cycle. This is because business owners and the finance community which supports them spend most of their time manipulating models and spreadsheets, and consolidating them within their teams. This is usually a time intensive process which takes valuable cycles away from actually running a business. These types of companies are usually analysis-starved because their resources are typically absorbed in data consolidation, leaving little time for critical analysis of data. This means entire FP&A organizations become “big F, little P, no A” in their focus. Thus, these firms’ decisions are often based more on anecdote and instinct, and less on objective evaluation.

For most firms, manual manipulation means using Excel to manage a planning process which has long outgrown it. Most firms instinctively want to adopt automation as an alternative, but are often unclear about how to implement automation successfully. Many firms fall into the trap which will be detailed in Common Pitfall #14, using whatever they have at their disposal to attempt to address the problem. Some firms adopt a hybrid approach, choosing to manage only “incremental” investment through the planning process, while allowing existing business units and functions to plan their existing budgets (given appropriate guidance, of course). This approach, while common, is in fact the worst of all possible processes. Incremental investments are almost always associated with existing budgets, so separating them results in misalignments between investments. Other firms, like the ostrich, bury their heads in the sand, committing to manual manipulation. These firms perpetuate the problems outlined above and continue to make them worse.

The solution.

Of all pitfalls, the solution to this pitfall is the most obvious, and requires the most dramatic organizational change. Actually automating the long range planning process involves the implementation of a centralized database which all constituents of the LRP process can access. This database functions as a single source of truth (SSOT). This approach does not eliminate the use of Excel, but it does replace the use of Excel as an alignment or consolidation tool. Phasing in an automated solution typically involves the use of Excel templates (since that is what constituents are familiar with) which can be imported into the centralized repository. Once the data has been imported into the centralized repository, the interface to that repository usually makes it easier to make revisions and change data within the tool itself. Eventually, the use of the Excel template to import data is usually completely replaced in the planning process.

One of the primary advantages of a centralized tool is the capability to expedite alignment. Usually, within matrixed organizations, access controls are put in place which allow various parts of an organization to view the submitted information which affects that part of the organization. For example, in some companies a sales function should be able to see sales requirements or budget information from various theaters, but that organization does not necessarily need to see services, engineering, or operations information. However, a theater (like North America) needs to see all information pertaining to it, including sales, marketing, engineering, services, operations, etc. When a centralized repository exists, changes made by the theater or the function are visible to each other, expediting the planning process.
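The matrixed visibility rule described above can be sketched as a simple filter over the central repository. This is a minimal illustration only (the theater names, functions, and budget figures are invented): a function sees its own rows across all theaters, while a theater sees every function's rows for that theater:

```python
# Hypothetical sketch of matrixed access control over a central LRP
# repository. All names and figures are illustrative.

SUBMISSIONS = [
    {"theater": "North America", "function": "Sales",       "budget": 40},
    {"theater": "North America", "function": "Services",    "budget": 25},
    {"theater": "EMEA",          "function": "Sales",       "budget": 30},
    {"theater": "EMEA",          "function": "Engineering", "budget": 20},
]

def visible_to(rows, role, value):
    """Filter the repository by viewer role: 'theater' or 'function'."""
    return [r for r in rows if r[role] == value]

# The sales function sees only sales rows, but from every theater:
sales_view = visible_to(SUBMISSIONS, "function", "Sales")
# The North America theater sees every function's rows for that theater:
na_view = visible_to(SUBMISSIONS, "theater", "North America")
```

Because both views are filters over the same repository, a change made by either party is immediately visible wherever the filter admits it, which is what expedites alignment.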

There is always a temptation to fall into what we will describe in Common Pitfall #12 Moving Target later on, allowing infinite regression of the changes by each organization (“there is no end to that”). On the other hand, consider how much more difficult the manual approach makes this problem. Automating the planning process expedites deadlines and facilitates faster alignment, freeing up finance resources to focus on analysis of the alignment rather than spending cycles facilitating alignment.



The 15 Pitfalls of Long Range Planning

Common Pitfall #2 – Class Warfare

The problem.

Everyone is familiar with the common expression “comparing apples to oranges.” The expression is commonly used to communicate the need to compare like objects to each other, and not to compare dissimilar objects to each other. Applied to long range planning, it means to compare common priorities (or projects, or whatever the unit of measure) to each other, and not to attempt to compare dissimilar priorities to each other. Two commonly used illustrations of dissimilar comparisons involve:

priorities associated with innovation and those priorities required to “run the business”

“revenue generating” priorities and “non-revenue generating” (“keep the lights on”) priorities

Most firms tend to be good at separating these priorities into separate classes so they aren’t compared to each other.

What is surprising is that a number of firms still use the same measures to compare the priorities within their “buckets.” For example, NPV may be the calculation used to evaluate non-revenue generating priorities; a different bucket may be used for revenue generating projects, but NPV may still be used to evaluate those priorities as well.

The symptoms.

When common models and units of measure are used across buckets within an organization, typically two kinds of behavior are observed. The first type of behavior is a “gaming” one – because models may not be applicable to a particular class of priorities, the owners of those priorities often interpret the need to complete the data as license to guess, and this guess will generally be more liberal than is warranted. The second type of behavior is a subtle psychological discrimination that often builds in an organization. In the revenue generating, non-revenue generating example, the revenue generating initiatives may refer to non-revenue generating initiatives as “sunk costs” or “burdens” or “organizational taxes.” When the same units of measure are enforced, this type of subtle linguistic discrimination can run rampant. This phenomenon often leads to “class warfare” – the tendency to subtly or psychologically compare the importance of one bucket to another based on the common unit of evaluation.

The impact.

There is a reason that priorities are often properly sorted into different classes – there is an inherent recognition that the priorities within one class behave differently than priorities in another class. Using the same unit of evaluation for objects within different classes defeats the purpose of separating them in the first place. There should be more appropriate measures for objects in a different class – if there aren’t, the need to separate out the objects should be reevaluated. To treat the objects in the same way for decision making purposes can lead to improper allocations to certain buckets within an enterprise because there is always a tendency to aggregate the sum of the parts of the bucket. Ironically, this can result in comparison of the sum of the parts on equal footing again. This approach can actually jeopardize an enterprise’s ability to execute on any of the buckets.

The solution.

Recognize that classes of investments usually deserve different measures by which to evaluate them, and resist the urge to compare one “bucket” to another. In the example of non-revenue generating versus revenue generating initiatives above, forcing non-revenue generating initiatives to calculate NPV is often tantamount to asking them to supply speculative, unreliable information about the benefits realized. Decisions may need to be made within this group based on their impact or necessity for timely delivery or support of revenue generating projects, for example. Metrics about productivity or efficiency may even be more relevant to this group. EVA may be a more productive measure for revenue generating initiatives than NPV. This approach may require multiple “models” or “templates” for each bucket. The point is to avoid open class warfare between your priorities and buckets by measuring them in appropriate ways.
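The per-bucket approach can be made concrete with a small sketch. Here the revenue-generating bucket is ranked by standard NPV, while the non-revenue bucket is ranked by a simple cost-per-supported-user efficiency metric; the second metric, the discount rate, and all cash flows are illustrative assumptions, not a prescribed model:

```python
# Illustrative sketch of using different measures per bucket: textbook NPV
# for revenue-generating projects, and an assumed cost-efficiency metric
# for non-revenue ("keep the lights on") projects. All figures are invented.

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at time zero (the outlay)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Revenue-generating bucket: rank by NPV at an assumed 10% discount rate.
revenue_project = [-100.0, 60.0, 60.0]       # outlay, then two years of inflows
print(round(npv(0.10, revenue_project), 2))  # 4.13

# Non-revenue bucket: rank by cost per supported user (lower is better).
infra_project = {"annual_cost": 50_000, "users_supported": 2_000}
print(infra_project["annual_cost"] / infra_project["users_supported"])  # 25.0
```

The point of the sketch is that the two numbers are never compared to each other: each metric ranks projects only within its own bucket.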



The 15 Pitfalls of Long Range Planning

Common Pitfall #1 – Information Overload

The problem.

Requesting too much information during the LRP Process is probably the most common pitfall of all. This information tends to be quantitative and is usually requested in an Excel spreadsheet or “template” in the LRP process. Those of us in the finance community are especially good at crafting intricate templates. We often want to know things like “how many headcount will be required to execute this program in a certain theater in each quarter for the next five years.”

The symptoms.

Workers in the finance community will often push back directly and vocally. Those persons outside the finance community are often too diplomatic to provide direct feedback, but firms with this problem often get questions like “why do you need to know that?” or “what are you going to do with this information?”. Often, the most telling sign that too much information is being requested is that the information simply won’t be provided. A company that finds itself in this situation, especially when there are too many “blanks” to chase down in attempting to obtain the information, is almost certainly asking for too much detail.

The impact.

Asking for too much information is the fastest way to derail an LRP Process. When the requested information isn’t collected, usually a firm has one of two choices. One is to complete the missing data at the corporate level using the rationale “well, we asked for it, and they didn’t give it to us, and we told them we would fill in the data they didn’t provide.” This approach is especially tempting when the owners of the LRP process have access to the whole body of LRP data and historical information. However, putting words into the mouths of business owners is dangerous – executives from various functions or business units will disavow the information (“those aren’t my numbers”), undercutting the credibility of LRP in the budgeting and planning process. Another choice firms often make when faced with incomplete information is to revert to a least common denominator approach, essentially simplifying the process to accommodate the data they do receive. This approach usually leads to insufficient information upon which to base conclusions – because various parts of the business have “completed” the information in the template in different ways. Either approach, or a hybrid of the two, leads to a body of information which is unreliable for decision making purposes. At best, this problem results in a lack of sufficient information for decision making; at worst, erroneous decisions may be made because the available data isn’t really giving a true picture of the choices facing the enterprise.

The solution.

Almost all LRP processes that suffer from Information Overload need to follow a simple rule: simplify, simplify, simplify, and then when you think you are simple enough, simplify some more. In fact, most simplified LRP templates will make the owners of the template feel uncomfortable, because they will always feel that not enough information is being requested. Start by understanding the firm’s decision making processes (business reviews, portfolio reviews, etc.), and the inputs to those decisions. Ask only for the information which is germane to those inputs – only that which is necessary to help formulate the inputs to those decisions. The answer to the question “what do you plan to do with this information” should be self-evident. When those questions aren’t asked any longer, the firm has reached the “right” level of information request.



The 15 Pitfalls of Long Range Planning
Introducing the 15 Pitfalls

Many companies have some kind of strategic planning which is, in theory, linked to their budgeting for the following fiscal year. In practice, strategic planning is rarely coupled with budgeting. In many companies strategic plans are merely guidelines which serve as context for the budgeting process. Linking and tracking budgets against strategic plans is the hallmark of successful companies. This paper will help diagnose common issues and suggest potential solutions.

This process goes by different names in different companies. For industries which rely heavily on fixed investments, it is often referred to as “capital planning.” Companies who have little or no capital investment may refer to this exercise simply as “planning,” “strategic planning,” “budgeting,” “annual allocation” or some other term. In this paper, all these processes are referred to as “long range planning” or “LRP.”

LRP typically drives many activities which are crucial to a firm’s future. For example, annual budgeting, forecasting, and even strategic planning and portfolio management are often directly linked to LRP. As important as LRP is to an enterprise’s survival, few companies have efficient, effective LRP processes. In fact, LRP tends to be one of the processes most commonly criticized by finance and non-finance constituents alike. While there is no “silver bullet” for improving LRP in a corporate environment, there are some immediate steps a firm can take which will set it on a solid path to an improved planning environment.

This series examines some of the common pitfalls associated with the Long Range Planning process. Many are interrelated or associated, others represent different extremes on a spectrum, but each pitfall manifests itself in some unique ways which are easy to identify. Recognizing the existence of these symptoms doesn’t necessarily mean that the problem exists, but the symptoms are warning signs – if they are present, firms should carefully consider the suggested solution. As a general rule, the suggested solutions are ones which will only improve a company’s decision making approach.

In this series, we will follow the same format for each pitfall. First, we will give a brief description of the pitfall, followed by a way to recognize the symptoms of this pitfall in your organization. Next, we will turn our attention to the impact which this particular pitfall can have if unchecked, but we will then offer a solution to help avoid this particular pitfall or head it off.



Finance Analytics Analyzed Concluded
What Analytic Governance Really Is

In part one of this series we looked at what the world means by the term “governance” with respect to analytics. In part two, we introduced a completely different concept – the idea that there were actually two governance concepts with respect to analytics. The first concept is the one which is commonly called “governance” today and is actually what we call “data governance.” This common notion of governance is a well-known issue that most analytic vendors have “solved” now. The second one is called “presentation governance” and it is an entirely new but very important concept. In this conclusion we will explain why presentation governance is in fact the new “governance” in analytics.

Data governance has become the “cost of admission” for analytic packages. It has gone from a “me too” feature to a “must have” feature. At this point, an analytic package which does not have functionality enabling the use of a “single source of truth” (SSOT) will not be considered by any serious enterprise. Initially, vendors who did not support the notion of data governance did not really understand the need of their package to stage and enable an SSOT, explaining to clients that they could simply mandate the use of the same data source by their customers (as if clients never thought of that!).

It seems obvious to us now that this type of solution was inadequate. Users across a company are unlikely to use the same data source. When they do, the data source may contain multiple tables or multiple time frames. There is a common problem of single data sources having sufficient complexity that they are often manipulated. Of course, even when none of these conditions is present, there is still the problem of end users developing different interpretations of the same data, then developing graphics which support that narrative.

All of these conditions still exist even if the issue of data governance is solved. Data governance solutions do not ensure that everyone will display or talk about the data with the same narrative. While getting everyone to use the same data is clearly a step in the right direction, this step does not ensure that people across the company will discuss the data using the same strategic perspectives.

This does not mean that different interpretations of data cannot be valid. In fact, we have often seen them be the basis for very productive discussions. When productive strategic discussions occur, however, they take place around a common graphic representation of the data. Where we have seen strategic discussions derail, they are almost always centered around debates about the proper way to look at data, or questions about the validity of a graphic representation itself.

Too often, we have seen very well placed executives spend significant time in strategy meetings discussing analytics where the x or y axis is significantly adjusted beyond zero, or where competing analytics applied to the same data result in radically different views. This happens because everyone has an agenda for interpreting portfolio data in their favor, and the lack of uniformity in analytics makes crafting the narrative first possible. In our last post, we even talked about how the loudest voice in the room or the prettiest picture can sway very important discussions.

It is desirable to head off situations like these. As much as possible, it helps decision makers to agree on analytics so that they can focus on the impact of various strategic options. That is where presentation governance comes in. It is important to redefine the term “governance” to mean that the most meaningful analytics are always applied to the data. True “governance” is built into the Agylytyx Generator. To the extent that a company can control the analytics used across the company, there can be no more discussion or debate about the data or how it is displayed. That means no more chance that the prettiest picture can carry the day. The loudest voice in the room is now irrelevant.

That means companies can now focus on what is really important. Analytics should not be controversial, and they should not be the focus of debate and discussion. Real analytic governance means that not only is there a single source of truth (SSOT) with respect to data, there is an SSOT with respect to analytic output (presentation) as well, so the quest for truth can be enhanced.



And Now for Something Completely Different
Presentation Governance in Analytics

In part one of this series we introduced the idea that real governance, when applied to analytics, meant more than ensuring that everyone was using the same data; it also meant ensuring that people were talking about that data in the same way. Part two of this series covered the fact that the term “governance” had only been applied to analytics in the past few years, and that it was taken to mean that everyone across a company was accessing the same data (often called using a “single source of truth,” which we abbreviated as SSOT).

This application of the term “governance” occurred out of necessity. It was a great leap forward in thinking when the term “governance” was finally applied to analytics. Early analytic engines had no data governance built in, so tech-savvy users were soon downloading analytic engines and applying them to any data source using any hierarchy or other data schema existing in a company. This led to more than one debate about the source of data used in the creation of the graphic output. The better the graphic and the greater influence on strategic decisions, the more important these debates became.

Ultimately the term “governance” was applied to analytic output in the same way it had always been applied to the production of financial information within a company. As we noted in our previous post, this led to the creation of engines built into the analytic software which would allow a company to designate “data custodians” who were able to use the engine to control the data which went into the analytic engine used across a company.

In our experience, this “governance” approach is insufficient. We now break the application of the term “governance” into two parts: 1) what is traditionally meant by “governance” – which we now call “data governance” – and 2) a previously unheard-of concept which we are calling “presentation governance.”

When we encounter new things, it can be difficult to understand them since we have not heard of them before, and we don’t even know they exist. Fortunately, it is pretty easy to see the problems caused by a lack of presentation governance. We have encountered all of these in various places. In all likelihood we have all encountered one or more of these situations:

A “new” or “novel” approach to displaying data captures the imagination of an entire executive team, leading to important strategic decisions being made.

Different analytic approaches to the same data lead to the loudest voice in the room being the owner of the analytic approach from which strategic decisions are made.

A best practice we have seen is for all the executives to agree on the analytic approaches which will be used consistently whenever strategic decisions are to be made. This means that the “constructs” (as we call them) are selected before the data is applied to them. For example, the executive team might decide on a scatter plot format which will depict the share of revenue by channel of distribution by business unit. This means that anyone attempting to use another format in a meeting (such as a bubble chart or trend line chart) will be invalidated in their attempt to discuss the data by virtue of not using the agreed-upon approach.

This best practice avoids a situation we have seen – the use of important strategic meetings to debate the merits of different approaches to strategic data. We have seen entire meetings completely derailed by this topic, when the executives in the room should have spent the same amount of time evaluating the actual results and deciding on a strategy based on those results.

There is an analytic application which allows companies to control the output and format used in addition to controlling the data. We call this kind of control “presentation governance” (as opposed to what is currently called “governance,” which we call “data governance”). An application environment which not only allows a company to specify the user output type but also enforces that “presentation governance” through the use of restricted built-in “constructs” is the type of application that meets real requirements for the use of analytics in decision making. That is real governance, and it is something completely different.



Analytic Governance
What the World Means by Analytic Governance

In Part One of this series we introduced the concept of governance within the context of analytics, and why it matters. An important assumption implicit in that piece was that data governance inherently addresses any governance issues when it comes to analytics. It does seem reasonable to assume that since the data being analyzed is governed, the analysis of that data will be subject to the same governance by the property of transference.

Examining this erroneous point of view is the central thrust of this post. When the financial community talks about governance today, it is within the context of ensuring there is uniformity around a Single Source of Truth (SSOT). The “state of the art” in governance goes beyond that – finance communities now realize the importance of developing a common set of definitions as well.

Initial analytic products did not constrain users at all, or even enable basic governance. Even when companies could agree on the same data set, different approaches to the data often thwarted attempts at governance. It was possible to use these products to access data and either drop out certain data sets or perform analysis based on definitions that resulted in disparate views of performance.

In one large company, for example, several different “hierarchies” – ways the offerings were organized – were maintained. There were valid reasons for keeping separate hierarchies: one organized offerings according to a “market-facing” customer perspective, the other according to the way internal strategic and organizational decisions were made. While the two hierarchies used the same SSOT, so they were “tying out” from a perspective of totals, they often resulted in materially different measurements for a discrete entity, such as a business unit, geography, or channel. The result was that executives representing the interests of a particular entity would often present a very different view of that entity’s performance than the corporate executives did.

Most finance communities have now learned to head off these problems by anticipating the need to establish governance around the definitions (the “hierarchies” in the example above), so that executives across the company use not just the same data from an SSOT but also the same approach to the data. This means that not only is everyone “singing from the same songbook,” they are in fact “on the same page” and “singing the same song.”

In fact, many analytic products now have a data governance approach built in. As these approaches have improved, they allow companies to create analytics with the same data and definitions. Some products now institutionalize this approach, allowing finance users to certify not just the data but the rules for using that data. The result has been a great leap forward in applying the concept of governance to analytics.

This is the state of the art in the governance approach to analytics today. We think another leap forward awaits us. Because we have not been able to imagine applying governance to the presentation layer, we have been content not to apply governance at that level. In our next post, we will argue that the previous governance improvements, while steps forward, do little to address the overarching need for uniformity in output – and that this uniformity is a requirement for true governance.



Analytic Governance
Introducing Real Governance to Analytics

There are almost as many definitions of the word “governance” in corporations as there are credible sources on the matter; a quick online search shows that. Still, we tend to know which procedures are related to corporate governance. It is almost like the famous line about pornography from Supreme Court Justice Potter Stewart (actually attributed to his clerk): despite being difficult to define, you “know it when you see it.”

Today, governance in a corporate environment makes us think of policies and procedures which help companies adhere to the regulations they must follow and balance the interests they must serve. That may seem vague, but it is the best way to encompass all the various aspects of what we know as governance. Companies which cast a “wide net” when they consider governance-related items tend to attract the least unwanted attention in this area. Just about everything can be, and traditionally has been, linked with the concept of governance.

An exception is the use of governance in conjunction with corporate decision making, but this is changing. In the past, the term “governance” was underused in the area of corporate decision making. Since corporate strategic decisions by definition affect the direction of a company, all the interests present in most companies (from shareholders to employees to communities) have a stake in their outcome. Deciding what is best for a company’s direction means all of those interests have to be considered, and that is why governance in fact plays such a central role in corporate decisions. It is also why, as far back as 2007, the cover story of the September issue of Strategic Finance was an article titled “Linking Governance to Strategy: The Role of the Finance Organization.”

Analytics are at the heart of most companies’ corporate decision making. In setting strategy, companies frequently rely on analytics to support the rationale behind their moves. Such analytics are often presented at investor conferences and road shows, and are used as the underpinning for scripts in quarterly conference calls. These visual aids have come to pervade our approach to corporate decision making.

If corporate decision making is really a governance issue, and analytics are often the key to corporate decision making, it stands to reason that the use of analytics itself should be subject to the same governance concept. As early as 2007 it was becoming clear that governance was central to strategy. To that end, anything which will be an input to that strategy – such as analytics – should be subject to the same governance.

Conversely, the lack of governance in an analytic approach risks a lack of governance in corporate decision making. When the production of analytics is unregulated, there is a perceived or even real lack of corporate control over the governance process related to corporate decision making. The result can be a lack of alignment between corporate governance responsibilities and corporate strategy.

This predicament actually got worse before it got better, and analytic products were often largely to blame. Certain analytic approaches added fuel to the fire by fostering an unregulated “Wild West” mentality, encouraging anyone who could use a business intelligence product to create analytics which could then affect corporate strategy. The problem was that some of those people were unintentionally presenting incomplete pictures of the data, or even working with inaccurate data sets to begin with. This mentality obviously circumvented any attempt to govern the process.

This phenomenon did not go unnoticed, especially by analysts covering this marketplace, and products began to spring up to address the very problem of data governance. In order to ensure that all analytics were using the same data, these products leveled the playing field for input into those analytics. Having everyone across the company use the same “single source of truth” (SSOT) in their analytics was a tremendous leap forward in the formal adoption of a governance process for analytics.

It is now not enough to make sure everyone is using the same data; governance in strategic decision making means that everyone will not only be using the same data, but telling the same story about it. A new governance crisis is looming in the analytic world – even though the issue of data inequality has been addressed, people are still using very different analytic approaches to describe the same data set.

Truly governing analytics means ensuring that everyone across the company will show similar analytics in the same way, so that corporate strategy can be made objectively. Lou Gerstner, who led companies ranging from RJR Nabisco to American Express to IBM, famously required that all documents be formatted in the same way, even down to the font size, so that he would not be biased when leading strategic conversations. The same concept applies to the governance of analytics. A leading professor from the University of Maryland has made that very point on this blog. Enforcement of that approach will be vital to lasting and successful analytic governance.

This is part one in a four-part blog post series. The next post, entitled “What the World Means by Analytic Governance,” will cover the commonly held belief that data governance is sufficient.



Finance Analytics Analyzed
Taking Back the Leadership of the Mature Strategic Analytics Process – Conclusion

Our last five blog posts have been dedicated to understanding how to roll out a finance-led strategic analytic process. We have closely examined the best practices of this process from its very conception through to its impact on the decision making process. To ensure the best possible chance of success for a process such as this one, planning and forethought are required.

Understanding what works and what does not can help us avoid common pitfalls for finance-led analytic programs. To help us with this analysis we used a phase-based approach called the Finance-Led Process Lifecycle, a matrix we introduced in a previous blog post. To review, this is a typical “2 by 2” matrix: the intersection of an x and y axis forms four quadrants. The x (horizontal) axis is the degree of completeness in program design, ranging from initial conceptualization to complete design. The y (vertical) axis is the degree of corporate involvement, ranging from not involved to a high degree of involvement.

Because this matrix was designed to analyze all finance programs, we also talked about each quadrant in the lifecycle and what each one means, describing the lifecycle of any finance-led process. The arrow overlaid on the 2x2 plots the track that finance-led processes should take within a company. All finance-led processes essentially go through a lifecycle which starts with the conception of a project. In this first quadrant, it is desirable not to socialize an envisioned process outside the finance community; this is why the quadrant is named “Conception.” In the second quadrant, a finance-led process begins the socialization process. While it is still relatively low on the degree-of-completeness scale, it has started a gentle trajectory to the right as it climbs sharply into the second quadrant, named “Collaboration” since input from across the company is solicited in this phase, while the program is still at a low enough degree of completion to incorporate feedback. The program should become more complete during this phase with input from across the company, and finance should turn socialization into a support-building method as the program moves into the third quadrant, called “Consensus” for its consensus-building intention. As any program moves toward full design, the finance-led process begins to decline again in terms of cross-company involvement. Notice that the arrow overlaid on the matrix does not “fall” quite as low on the y (vertical) axis of corporate involvement: while finance takes back more control over the process, it continues to involve others across the company in the successful execution of the process. That is why this final quadrant is called the “Coordination” phase.

We applied this matrix to an analytic process, since finance groups are often charged with producing analytics and supporting strategic decision making. First, we spoke generally about how a finance-led strategic analytic process could benefit from using this matrix as a planning aid. Next, we took a look at each of the four quadrants in the finance-led process matrix, and at the specific implications for a process designed to support strategic analytics. In the Conception Stage post, we talked about the importance of taking into account how analytics are used to create strategic influence within a company, and how that can be made easier through self-service facilities, the automation of charts, and the necessity of import mechanisms. In the Collaboration Stage post, we talked about the need to carefully plan the scope of the input, and to assess requirements for analytics, security, and data integrity. In the Consensus Stage post we focused heavily on usability, emphasizing the need to think about how other analysts in the company might use the process to help them with their existing systems and templates. We also focused on the need to ensure that these analysts can do that while still enabling finance to standardize the output so that everyone talks about the data in the same way. In our last post we looked at the final phase in the process, the Coordination phase, where we focused on how to plan a process for maximum support of decision makers, including how to make analytics better, faster, and more consistent.

We were detailed in our analysis for a reason: we have been involved in many analytic processes across companies of different sizes and industries, and we wanted to provide as much insight as possible. We were so methodical because we felt the information needed to be summarized and organized in order to be actionable. It is very important to remember that these are not merely considerations for a finance-led analytic process at each stage of the lifecycle – these are best practices to be anticipated and even planned for from the very beginning of a strategic analytic exercise. Planning out each of these sixteen elements (four in each phase) will result in a successful analytic program which supports and improves business decisions on an ongoing basis.



Finance Analytics Analyzed
Taking Back the Leadership of the Mature Strategic Analytics Process – The Coordination Stage

We have been looking at the elements of success required for a finance-led strategic analytic process. We have closely examined the best practices of this process from its very conception through the socialization and consensus building stages. Once this process reaches maturity, it is time for finance to manifest its ownership over the process and lead the company through the preparation of these strategic analytics.

Of course, finance ownership has been implicit through the entire lifecycle of the process, but the time for consensus building and input should now be clearly ended, and the process should be self-contained and ready for execution. This fact is recognized by the placement of the fourth quadrant on the matrix – remember that as a finance-led process reaches maturity, it moves from the consensus-building third quadrant into the fourth quadrant, Coordination, by “falling” on the corporate involvement axis from high to low. This is because finance is now called to execute this corporate-wide process, and consensus building should already have been accomplished.

For finance to successfully maintain leadership of an impactful and productive strategic analytic process, four key things should receive emphasis from the finance team. The first two have to do with the generation of the analytics themselves; the second two have to do with the way the analytics are used. The best practices for finance in each of these four areas will largely determine the success of the finance-led analytic process. As we discussed, each of the first three phases is vital to a successful outcome. However, success in those phases is irrelevant if the process doesn’t yield useful output.

Let’s look at best practices in each of the following coordination areas:

Presentation Automation. Ensuring that presentation building is expedited by analytic processes and technologies.

Output Uniformity. Obtaining consistent and repeatable analytic formats across time periods and across the company.

Responsive Acceleration. Making sure that analytic output can be generated a lot faster than was previously possible.

Decision Support. Verifying that the analytics produced are actually being used by decision makers.

Let’s take a little closer look at each element and how a finance team can help ensure success in this coordination stage.

First, it is vital for a finance team leading a strategic analytic process to ensure that entire presentations can be produced much faster than was previously possible. We have seen finance teams “can” entire sets of graphs in a spreadsheet, such that they could simply add columns each quarter to update the graphs. Although it worked, it was not without its problems, primarily caused by teams producing inputs to the analytic process in slightly different formats. Still, the amount of time spent was considerably improved by “canning” analytics, so that the team could spend much more of their time on things like diagnostics, root cause analysis, and predictive analytics. Of course, these types of critical analyses will always require human involvement to create an effective presentation. However, software which can help consolidate and generate analytics quickly will increase the amount of time a finance team can spend on this more valuable forensic investigation. A best practice in this phase is to use software which can also be used across the company to speed consolidation and to make analytics readily available.

Second, it is critical to retain consistent control over the analytic output. We have written extensively about why this is so important. In this context it comes down to a simple question of governance. It is common for vendors to talk about governance in the context of data; finance teams also need to think about it in the context of analytics. We have all heard the adage that “the squeaky wheel gets the grease.” In terms of corporate strategic analytics, it is often the slickest-looking analytic that captures the attention of the executive team, even if the picture misrepresents the data story. A common criticism of some analytic packages in this area is that they essentially produce a “Wild West” mentality in a company, meaning that people compete to become the “squeakiest wheel” – i.e. the person with the best story from the data. In the best practice we’ve seen here, the software used across the company generated the same format and colors for all teams’ analytics, regardless of the slice of data being visualized. This method ensured that everyone across the company was always consistent.
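The mechanism behind this best practice can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: a single locked chart specification (`CORPORATE_SPEC`) is shared company-wide, and teams supply only their data slice, so format, palette, and labels never vary between teams.

```python
# Hypothetical sketch: one frozen chart specification used by every team.
from dataclasses import dataclass

@dataclass(frozen=True)
class ChartSpec:
    chart_type: str = "bar"
    palette: tuple = ("#1f77b4", "#ff7f0e")  # fixed corporate colors
    y_label: str = "Revenue ($M)"

CORPORATE_SPEC = ChartSpec()  # the single spec every team must use

def render(data: dict, spec: ChartSpec = CORPORATE_SPEC) -> dict:
    """Combine a data slice with the locked spec; only the data varies."""
    return {"spec": spec, "data": data}

# Two different teams, two different slices, identical presentation:
chart_a = render({"EMEA": 41.0, "APAC": 37.5})
chart_b = render({"Q1": 12.0, "Q2": 14.3})
assert chart_a["spec"] is chart_b["spec"]
```

Freezing the spec (`frozen=True`) is the governance point: a team cannot mutate the colors or format to become the “squeakiest wheel,” because the presentation layer is simply not theirs to change.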

Third, it is important to ensure that requests can be quickly and expeditiously addressed. At first blush, this may seem similar to the first best practice mentioned. While there are similarities, there are important differences as well. We know how important it is that our teams are responsive to requests. If an analytic process does not increase the substance and decrease the time it takes to respond to requests, it may not be perceived to add enough value to justify its existence. Since requests for different views cannot always be anticipated (especially scenarios, the “what happens if we do X” questions), it is important that all possible views of consolidated data be readily accessible to generate analytics. A best practice we have seen here accommodates applying entire analytic reports (notice that “reports” is plural here) to any slice of data, and allows new sets of data to be created with the equivalent of “save as” functionality. We have even seen teams use this approach to do real-time modeling with decision makers, although we would not recommend that as a best practice – it is almost always better to demonstrate response time measured in hours. Still, we have found that most finance teams are not able to be responsive enough when it comes to analytics. Of course, making a strategic analytic process worthwhile to decision makers means reducing response times significantly, whatever that may mean in your company.
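A minimal sketch of the “save as” pattern described above may help. All the names here (`apply_report`, `save_as`, the toy units and figures) are hypothetical: the point is that a full report definition can be run against any slice of data, and a what-if scenario is branched as a copy without disturbing the original slice.

```python
import copy

def apply_report(report: list, data_slice: dict) -> list:
    """Run every analytic in a report definition against a chosen data slice.
    (A toy 'analytic' here just totals the slice.)"""
    return [{"analytic": name, "result": sum(data_slice.values())}
            for name in report]

def save_as(data_slice: dict, changes: dict) -> dict:
    """Branch a scenario: deep-copy the slice, apply the what-if changes,
    and leave the original untouched."""
    scenario = copy.deepcopy(data_slice)
    scenario.update(changes)
    return scenario

base = {"unit_a": 100.0, "unit_b": 80.0}
what_if = save_as(base, {"unit_b": 95.0})  # "what happens if we do X"
assert base["unit_b"] == 80.0              # original slice is preserved
report = apply_report(["total_revenue"], what_if)
```

Because the report definition is separate from the data, answering a new “what if” question is a matter of branching a slice and re-running the same reports, rather than rebuilding the analytics by hand.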

Finally, it is necessary for a successful strategic analytic process to support decision making processes. That seems self-evident – the whole reason for a strategic analytic process is to help the business make better decisions. Yet we have often seen that deemphasized or overlooked in the rush to complete a process or to make sure folks across the company are participating. In one notable example, a finance team spent months designing a data consolidation and analytic process, only to see decision makers make a “gut feel” call anyway. In this case, decision makers were essentially expressing their lack of confidence in the analytic output by politely ignoring it. This situation could have been avoided to a large degree if the Phase II (Collaboration) and Phase III (Consensus) stages had been successfully executed. Likewise, if the three elements above are successfully completed, the likelihood of impacting decisions becomes much greater. A best practice we’ve seen here is to measure the impact of the analytic process on decision making. We have seen teams express this in terms of time involved in decision making, debate time in meetings, and scenarios supported. Whatever the key metrics are for decisions in your company, expressing a “before” and an “after” view will help make analytic coordination successful.



Finance Analytics Analyzed
Looking at Analytics Using the Finance Led Process Lifecycle – The Consensus Stage

This blog post is part IV in our series which applies the Finance Led Process Lifecycle in an effort to see what we can learn.

The third quadrant of the Finance Led Process Lifecycle is the “Consensus” stage. It is called consensus because the finance led analytic process is no longer in the formative (envisioned) stages and begins to move into its useful stage. It should now be familiar to all constituents, having been introduced across the company during the previous stage (stage two – collaboration).

When deciding on a process to lead strategic analytics across a company, this quadrant is essential to formally enlisting cross-company participation. When the finance-led process becomes real rather than just planned, it should be an outgrowth of successful collaboration. In this stage, the roles and agreements made in the collaboration stage become real. As they do, there are some vital elements which will make the consensus building stage successful. These are the ones on which we will focus in this blog post.

The consensus phase of a strategic analytic process is about actually implementing the idea and forging agreement for formal participation in the process. Although the process is in use as the analytic program becomes real, it is important to communicate at this point that important analytic tweaks are still necessary in order to prove the process’s adaptability. As the program becomes real, consensus for the process can be successfully forged if four major steps are taken:


Template Standardization. Ensuring that all groups use the finance led process in the same way, relying on the same data sources and inputs.

System Compatibility. Making sure that all constituents of the process are able to successfully contribute based on their technologies and processes.

Analyst Usability. Providing constituents of the process with the ability to feed it back into the strategic processes in their respective organizations.

Governance Enforcement. Ensuring that constituents of the process, including decision makers, are using the same analytic output and approaches to make consistent decisions.

As the finance led strategic analytic process moves from the collaboration to the consensus stage, there are a couple of key gating factors to keep in mind. In order to make sure that groups across the company live up to the roles they agreed to play in the collaboration phase, it is essential to ensure that they can use the process as advertised. A key to achieving widespread usage is to make the process easy to engage with. To achieve that ease of use, the process must have standard templates and be compatible with existing systems.

Templates are almost inevitable in any successful finance led strategic process. When it comes to creating these templates, designing a standard that can be widely used is vital. Two best practices are notable here. The first is to use a “least common denominator” approach: incorporate only what is necessary to support strategic analytics in the company. The second is to make the template “feel like” something already in use in the company – often a commonly used type of system, which in many companies means a spreadsheet metaphor. In any case, the best way to ensure a smooth transition from the collaboration to the consensus stage is an easy-to-use, engaging template.

Ensuring system compatibility is also a key part of transitioning a finance led strategic analytics process from collaboration into consensus building. Although the term “system compatibility” sounds strictly technological, it isn’t. Systems can also be processes – namely the strategic processes used by pieces of the business, be it a business unit, region, or channel. In large companies, it is not uncommon for parts of the business to have their own strategic processes as well. As a strategic analytic process is implemented at a “higher level” (say, a corporate level), it is important during the transition to the consensus stage that these strategic systems, and especially the analytic output they throw off, be an input. Ideally, these strategic systems would embrace and use the finance-led process, or at least the analytic library, to ensure maximum compatibility. At a minimum, compatibility at the system level means that the processes are timed appropriately to interlock.

As the finance led strategic analytic process approaches maturity, the question of analyst usability becomes vital to maintaining consensus. Some of this should have already been achieved with the measures described above. However, assessing the ability of users across the company to easily participate in the process is both an opportunity to continue building consensus and a last chance to facilitate adoption. Although the process should be well defined by now, a last push to demonstrate inclusiveness before taking control of the process for final implementation is warranted.

Once the finance led strategic analytic process has achieved system compatibility and eased adoption through the creation of standard templates and analyst usability, the consensus stage becomes more about leveraging the consensus that has been built through these processes, and not about building it any longer. As the process begins its transition back into the finance realm as a truly finance led process, there is a qualifying gate necessary to leverage a successful implementation.

The last step, governance enforcement, is about using consensus to ensure that everyone across the company will use the same analytics to talk about business strategy. We have written before about the lack of governance at the output level, and how that can lead to lying with statistics. There is a unique opportunity during the rollout of this process: once consensus has been adequately built, that support can be used to get all analysts using not just the same analytics, but the same scale, colors, sizes, etc., in a way that ensures everyone across the company can talk about problems in the same way. It also helps assure consistency in decision making.

The consensus phase of a finance led strategic analytic process, then, is one which builds on the success of the collaboration stage by ensuring participation across the company. As the program begins its transition to the final stage, which we will look at in our next post, the finance led strategic analytic process begins to use that consensus support to ensure the ultimate success of the process.



Finance Analytics Analyzed
Looking at Analytics Using the Finance Led Process Lifecycle – The Collaboration Stage

This blog post is part III in our series which applies the Finance Led Process Lifecycle in an effort to see what we can learn.

The second quadrant of the Finance Led Process Lifecycle is the “Collaboration” stage. It is called collaboration because while the finance led analytic process is still in the formative (envisioned) stages, it is well formed enough to begin to “socialize” the concept outside the finance organization. This is a critical transition point in the finance-led process, since this is the phase during which the process will be subject to criticism. It is also the phase that offers an opportunity to lay the groundwork for the next phase of the project by obtaining the kind of buy-in which will be required to build consensus.

When deciding on a process to lead strategic analytics across a company, this quadrant represents an opportunity to hear and understand requirements from other parts of the organization. Gathering requirements in a sympathetic and methodical way will help instill confidence in the analytic process and build support from organizations outside of finance. Identifying and even sympathizing with concerns in ways that can turn critics into champions is a much more methodical process than most people realize. There are a few simple things to plan for which can help make that happen. These are the ones on which we will focus in this blog post.

The collaboration phase of a strategic analytic process is about determining if the idea is a good one and is in fact one worth rolling out. As soon as a finance team begins to socialize an analytic process that will be corporate wide and is likely to impact corporate decision making, there are some elements to consider in the planning process. The four main potential pitfalls and the real questions to think about during this stage of the process are:


Data Integrity. Understanding all data sources currently being used across a company in order to help generate strategic analytics, and reconcile them.

Scope Determination. Establishing the extent of inclusion necessary and desirable in order to establish a strategic analytic process successfully.

Analytic Assessment. Performing a complete inventory of analytics currently used for strategic analysis within an organization.

Security Verification. Documenting security and access levels required by all constituents within the organization.

First, let’s look at the issue of data integrity. This is often the most vexing issue within large companies. Often, companies will find that the “hierarchies” used by different groups are not the same. Tying out specific data, such as programs, can be difficult or even impossible under these circumstances. In the collaboration phase of introducing a strategic analytic process, transparency about these data difficulties is vital. This kind of openness may be the best hope for achieving a consensus approach to data integrity.

It is important in this phase to avoid trying to ensure complete data integrity across all systems. All the elements that are important in this phase of the project are tied together: avoid the tendency to overestimate the need to tie out data that will not be used in the process (see “scope determination” and “analytic assessment”). In other words, remember that for the success of this project, only the data needed by users to support the strategic analytic process requires data integrity.

Obtaining buy-in from the various constituents regarding data sources and output is necessary to ensure that data integrity persists. Without agreement from constituents across the company, issues of data integrity may crop up again later, even after they were initially and successfully addressed. This is why transparency is crucial: it ensures that the sources of data, and the ways they are applied, are used consistently and persistently across the organization.

The issue of data integrity is best resolved by understanding the analytic requirements across an organization, so we will tackle the analytic assessment next. Although in reality these four critical elements in the collaboration phase of a finance-led strategic analytic process should be run simultaneously, if we were proceeding ordinally, this would be the first element in the collaboration phase. The reason should be obvious: understanding the set of analytics used across an organization for decision making will help determine the scope of the project and identify the data for which the project needs to ensure integrity.

Casting a wide net in the collection of strategic analytics across an organization can have beneficial and long-lasting effects. A best practice for the collaboration stage here is to identify analytic subject matter experts in various parts of the organization and solicit their analytic approaches – and that means not just the analytic itself but the detail around it (how it is calculated, how it is used, etc.). Next, these analytics should be captured in an online repository which adheres to a common format and is shared across the organization. In this way, the analytic assessment becomes an exercise that benefits everyone across the organization. In turn, confidence in the finance-led strategic analytic process is achieved.
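As one illustration of what such a common-format repository might look like, here is a small Python sketch. The entry fields and the example analytic are our own assumptions, not a prescribed format:

```python
# A minimal sketch of a shared analytic repository: each analytic is captured
# in a common format (name, owner, how it is calculated, how it is used) so
# it can be shared across the organization. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AnalyticEntry:
    name: str
    owner: str            # subject matter expert who contributed it
    calculation: str      # how the analytic is calculated
    usage: str            # where and how it is used in decision making
    data_sources: list = field(default_factory=list)

repository: dict = {}

def register(entry: AnalyticEntry) -> None:
    """Add an analytic to the shared repository, keyed by name."""
    repository[entry.name] = entry

register(AnalyticEntry(
    name="Gross Margin Trend",
    owner="FP&A",
    calculation="(revenue - cogs) / revenue, trailing 4 quarters",
    usage="Quarterly strategy review",
    data_sources=["ERP revenue", "ERP cogs"],
))
print(sorted(repository))   # ['Gross Margin Trend']
```

Because every entry carries its calculation and usage detail, the assessment exercise produces an artifact everyone can consult, not just a list of chart names.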

Determining the scope of the strategic analytic process is important to avoid over- or under-reaching. It is important to define very carefully what this process will achieve, the time parameters involved, and the roles and responsibilities of each constituent in the process. Collaborating on “who provides what, to whom, and when” is essential. Make sure that the scope of the process is carefully documented in this stage, and make sure constituents sign off on their agreement to the scope.

Finally, in most organizations it is important to ensure that collaboration on security requirements is achieved and understood. In many organizations security is paramount and has become so well-defined that it is already well understood. Even so, the collaboration phase presents an opportunity to document any additional security requirements from other parts of the organization, and a chance to demonstrate the importance of security and compliance in the process. Remember that in this phase, security also means access control. It is important to understand at this point which constituents should be able to see which data, and at which points in the process.

Remember that the collaboration stage is the stage where the finance-led process is still in the idea stage, but begins to be socialized outside the finance community. It is important during this stage not only to gather all requirements from across the organization, but also to communicate back to these organizations that a process is being designed in which they can have confidence and participate.



Finance Analytics Analyzed
Looking at Analytics Using the Finance Led Process Lifecycle – The Conception Stage

This blog post is part II in our series which applies the Finance Led Process Lifecycle in an effort to see what we can learn.

The first quadrant of the Finance Led Process Lifecycle is the “Conception” stage. When deciding on a process to lead strategic analytics across a company, this quadrant is where the initial decision and planning about the process take place. As we proceed through a quadrant-by-quadrant analysis of the potential pitfalls an analytic process may face along the way, it is important to remember, while planning in advance for all these quadrants of the lifecycle, that even the ideation and planning process contains some potential pitfalls itself. These are the ones on which we will focus in this blog post.

The conception phase of an analytic process is about determining if the idea is a good one and is in fact one worth rolling out. As soon as a team conceives of an analytic process that will be corporate wide and is likely to impact corporate decision making, there are some elements to consider in the planning process. The four main potential pitfalls and the real questions to think about during this stage of the process are:

Strategic Influence. Understanding how analytics can be a regular and compelling way to influence strategic direction.

Chart Automation. Assessing how easy it is to produce analytics repeatably, consistently, and reliably.

Self-Service Facilities. Assessing whether analysts are able to create and edit their own analytic views on the fly, and if not, understanding what it will take to get to that state; and finally,

Import Mechanisms. Understanding how data from systems of record is being incorporated into analytic systems.

First, let’s look at the strategic influence of analytics. In the idea conception stage, the first question to ask is whether or not there is really an appetite and audience for strategic analytics. It is important at this stage not to confuse strategic analytics with analytics in general, or with what most people might call “KPIs” or “operational metrics.” If these kinds of strategic analytics are currently used in corporate decision making, the question becomes how they can be produced in a regular and compelling way so that they become systemically embedded in the company’s decision making process. If they are not being used currently, the question becomes even more difficult. If there is no appetite for this kind of strategic influence using analytics today, designing a process to produce them makes little sense. Creating a desire for the use of strategic metrics is a prerequisite to building an analytic process in the first place.

Second, let’s turn our attention to the idea of chart automation. We have previously described what we mean by chart automation in the context of analytics. Here we mean that entire presentations are essentially shells that can be instantly populated with data or slices of data, so they can be produced at any time with current information from any perspective. Anyone seeking to introduce an analytics process successfully needs to plan for the ability to automate charts.
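One way to picture a chart shell is as a template that can be re-populated on demand with any slice of current data. The following Python sketch is purely illustrative – the dataset, field names, and the `chart_shell` helper are all hypothetical:

```python
# A sketch of "chart automation": a chart is a shell (title template plus a
# query over the data) that can be re-populated with any slice of the data.

DATA = [  # current data from the system of record (illustrative)
    {"region": "EMEA", "quarter": "Q1", "revenue": 120},
    {"region": "EMEA", "quarter": "Q2", "revenue": 135},
    {"region": "APAC", "quarter": "Q1", "revenue": 80},
    {"region": "APAC", "quarter": "Q2", "revenue": 95},
]

def chart_shell(title_template: str, dimension: str, measure: str):
    """Return a function that fills the shell from any slice of the data."""
    def populate(rows, **slice_filters):
        rows = [r for r in rows
                if all(r[k] == v for k, v in slice_filters.items())]
        return {
            "title": title_template.format(**slice_filters),
            "series": [(r[dimension], r[measure]) for r in rows],
        }
    return populate

revenue_by_quarter = chart_shell("{region} Revenue", "quarter", "revenue")
print(revenue_by_quarter(DATA, region="EMEA"))
# {'title': 'EMEA Revenue', 'series': [('Q1', 120), ('Q2', 135)]}
```

The shell is defined once; every refresh or new slice (a different region, say) is just another call with current data.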

Next, let’s consider the subject of self-service facilities. Within the context of an analytic process, the term “self-service” as traditionally used has become almost meaningless. As popularly defined, “self-service” simply meant that no IT was required in order to create and use analytics. As more and more vendors began to support this requirement by allowing analysts to write their own queries with ease, “self-service” became a prerequisite. A good deal of perceived flexibility emerged, since analysts could do a lot within the established parameters – but there was, of course, still a need to call IT in order to widen those parameters. Today we take self-service to mean that entire presentation templates can be created by end users without ever contacting IT. What this means is that when introducing an analytic process, the entire process has to be a flexible platform that all analysts can engage without contacting IT. Ever.

Finally, it behooves those planning an analytic process to ensure the process contains import mechanisms. The ability to choose from existing data sources as a baseline, and then to use those data sources to create slices within the data set, is critical to any successful analytic process. This means that analysts must be able to import data from any source of truth within their enterprise, quickly and easily. It also means that pulling in the data must be easy and quick enough that analysts can perform this task themselves. Designers and planners of a successful analytic system must ensure that analysts will be able to do this themselves.
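As a simple illustration of an import mechanism an analyst could run without IT, here is a hedged Python sketch that parses a CSV export (simulated in memory; the column names are assumptions):

```python
# A sketch of a self-service import: an analyst points at a source of truth
# (here a CSV export, simulated with io.StringIO) and pulls it into the
# analytic system as typed rows. Column names are illustrative.
import csv
import io

RAW_EXPORT = """product,units,price
widget,10,2.50
gadget,4,7.00
"""

def import_source(csv_text: str) -> list:
    """Parse a CSV export into typed rows the analytic system can use."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        row["units"] = int(row["units"])
        row["price"] = float(row["price"])
        rows.append(row)
    return rows

rows = import_source(RAW_EXPORT)
print(sum(r["units"] * r["price"] for r in rows))   # 53.0
```

Once the data is imported as structured rows, slicing and summarizing it is a one-liner the analyst can do alone.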

The key to successful “Conception” (design and planning) when it comes to analytic systems is to anticipate some basic requirements, namely for automated, self-service template creation platforms with data import capabilities.

Next week we’ll start to think about how the analytic process works when the idea is first broached outside a finance team, and how to plan for success in the collaboration stage.



Finance Analytics Analyzed
Looking at Analytics Using the Finance Led Process Lifecycle – An Overview

We can learn a lot about Strategic Analytics circulated by Finance by using the Finance Led Process Lifecycle. In order to increase the effectiveness of any finance-led process which will result in analytics that affect strategy decisions at a company, we are starting a new series of blog posts. These posts will focus on ways in which finance teams can significantly improve their analytic input to decision makers.

Finance teams are often tasked with owning this type of strategy process within a company, whether it is done in the guise of “long-range planning,” “budgeting,” or any combination of these and other such processes. Finance teams that own any process which will ultimately affect the strategic direction of a company can significantly increase their chances of success by using the Finance Led Process Lifecycle.

Before we embark on this series, it is important to clarify what will and what will not be covered. Operational Analytics, while important in their own right, will not be addressed. The full process of long range planning and budgeting will not be addressed; only their analytic components will be considered. This blog has covered long range planning and budgeting separately in other posts. Strategic Analytics usually mean something different from common terms like “Operating Metrics,” “KPIs,” or “Balanced Scorecards.” Strategic Analytics are the focus of this series.

This series applies the Finance-Led Process Lifecycle (shown at left) to understand how finance can best roll out, ensure successful adoption of, and support successful use of, strategic analytics. To understand this Lifecycle fully, it may help to review the blog post where this matrix was introduced and explained. In short, it tracks the rollout and execution of any finance-led process through its lifecycle, dividing that lifecycle into four distinct quadrants. Using those quadrants to look at the issues a finance-led process faces during each stage can help a finance group prepare for the issues bound to come up during that stage.

In this series, we will apply this Lifecycle to the process of Strategic Analytics as owned by Finance teams. We will assume that the process being adopted is an analytic one, and along the way we may also discover impacts relevant to Finance Led Strategic Processes which introduce analytics, but which may not have that as their focus.

Each blog post will cover a separate quadrant. Along the way, we will be looking at topics of relevance in the Analytic world. We will cover some very salient questions facing finance-led analytic projects today. They include the following topics:

Analyst Usability – how easy is it for analysts to use an analytic system?

Analytic Assessment – which analytics are most widely used in the company today?

Chart Automation – how easy is it to produce analytics repeatably?

Data Integrity – how reliable and consistent is the data which is relied upon to produce analytics?

Decision Support – what role do analytics play in strategic decision making?

Governance Enforcement – are all analysts using the same data when making analytics?

Import Mechanisms – how is data from systems of record being incorporated into analytic systems?

Output Uniformity - are all analysts using the same analytic approaches to similar concepts?

Presentation Automation – are the same or similar presentations made in automated methods today?

Responsive Acceleration – can an analytic process significantly improve decision making times?

Scope Determination – what constitutes and distinguishes analytics which help make strategic decisions?

Security Verification – can access to data used for analytics be controlled and limited?

Self-Service Facilities – are analysts able to create and edit their own analytic views on the fly?

System Compatibility – can analytic systems be seamlessly linked with systems of record?

Template Standardization – can analytics be treated as building blocks for entire reports and dashboards?

We have divided these questions into four quadrants. Since this is an ambitious list of topics, we will be taking the next few weeks to get into the “high weeds” on each topic. We will need four more of these blog posts, one to cover each quadrant. Finally, we will probably make one post which recaps the series. We hope you are as excited about reading these as we are about writing them!



Driver Based Planning Revisited

Combining the best practices identified for Driver Based Planning with our Lifecycle for Finance-Led Processes shows how to optimize such an initiative within your company. If your finance department plans to lead a Driver-Based Planning effort at your firm, designing according to these best practices can help increase your chances for a successful outcome.

Back in November, we wrote about 10 best practices for Driver Based Planning. Since then, Driver Based Planning has become a hot topic within the finance community. A recent publication by the Association for Financial Professionals entitled “AFP® GUIDE TO Driver-based Modeling and How it Works” deals with the topic in great detail. One of our principals is quoted extensively in this document.

The document constitutes a very hands-on methodology for actually conducting Driver Based Planning, and also contains no fewer than 10 case studies (from companies of very different sizes and industries) where Driver Based Planning was implemented to varying degrees and with varying levels of success.

The document closes with some best practices which largely mirror the best practices we put forward in our blog post back in November. Thinking about these best practices and combining them with our original list, we came up with a chart that explains the recommendations and puts them in a Lifecycle context. It became obvious to us in doing this work that the best practices which are most useful and broadly applicable are those which deal with political factors related to socialization and acceptance of Driver Based Planning, or Phase I and Phase II of our Finance-Led Process Lifecycle.

In our first application of the Finance-Led Process Lifecycle, we map the best practices for Driver-Based Planning in order to increase chances for success. In order to ensure that a Driver Based Planning initiative has the best possible chance to succeed in any company, it is vital to design a process which allows for the inclusion of these best practices at various stages in the process’s lifecycle. While previous writings have identified some best practices for Driver Based Planning, these have been lists of best practices with little explanation given about how to employ them. Thinking about how to employ these best practices in designing the process itself is necessary not just for including those practices, but for using them most effectively.

What follows is a quick quadrant-by-quadrant summary of the best practices needed to support a Driver-Based Planning initiative in any company. To maximize the chances of the success of a Driver-Based Planning effort, it is best to incorporate the necessary time and steps to support these best practices when planning an initiative. For the sake of making the graphic easy to digest, we have shortened each of the best practices identified in the various literature to a couple of words.

Phase One: The “Conception” Stage

In the first stage of Driver Based Planning, the initiative itself moves from being envisioned to being more fully developed and almost ready to expose for input. In this stage, there are a few best practices that are important. For a model to succeed, the team must identify the influencing factors behind the initiative (here called “know motivations”). As well, it is important that resources within the team be identified and cultivated. Chances are good that there are innate skills on the team which can be harnessed for this particular job.

There are several best practices which are vital at this stage when it comes to initiating the build of the model itself. One of the most difficult factors facing an initiative like this one is knowing how to get started in building the model. For that reason, a recommended best practice is to start small, focusing development efforts on the easiest, known factors. Another best practice is then to grow the model through the incorporation of scenarios, which will enable the model to naturally develop the kind of depth that will prepare it well for the next phase.

Finally, the best practice of embracing robust technology is critical at this stage. This best practice should not be put off until later in the lifecycle, since the choice of technology will become harder and more complicated later on. Building the Driver Based Planning initiative on a solid technology foundation will be vital to the success of the process itself.

Phase Two: The “Collaboration” Stage

During Phase II of the Lifecycle of a Driver-Based Planning initiative, the model and process begin to be exposed to the various constituents necessary to provide input. Several best practices in the literature for Driver-Based Planning map to this phase of a process’s lifecycle. During the Phase I design process, it is vital to ensure that adequate time and procedures are built in to support the exercises associated with the best practices in this phase.

It is very important that all requirements for Driver-Based Planning are identified at this stage. Many requirements will have been “baked into” the process during the Phase I design. But since Driver Based Planning is such an intricate process, involving so many players at this stage, it is a mistake to think that all requirements will already have been identified. In order to fully appreciate the nuances of the requirements for Driver-Based Planning, it is important to understand requirements before acting on them. Be wary of trying to react too quickly to expressed requirements. On the other hand, it is important to demonstrate a willingness to incorporate requirements by iterating the Driver Based Planning process quickly. Usually it is easier to listen to requirements and reiterate a Driver Based Planning process than it is to react to each requirement before iterating the process.

During this phase, the identification and cultivation of those “champions” from outside the finance organization – those who will advocate for the Driver Based Planning process – is extremely important. In order to make this happen, it is essential that the finance group is adequately resourced and that they partner closely with the requisite organization (usually operations) to cultivate those sponsors. During this stage, resourcing requirements sessions liberally (usually an ‘all-hands-on-deck’ situation) will demonstrate the group’s commitment to requirements gathering for the Driver Based Planning process. Partnering with operations to run these sessions will help maximize the chance of identifying and cultivating advocates for the Driver Based Planning process together.

Phase Three: The “Consensus” Stage

As the Driver Based Planning process becomes more complete based on a full incorporation of requirements, it begins to move into the consensus-building phase. If the Driver Based Planning process has been designed appropriately and has proceeded through Phases I and II successfully, Phase III should be easier and quicker to accomplish. Still, there are some best practices for Driver Based Planning which should be incorporated during this phase of the process as well.

First, during this phase, it is important to begin by demonstrating that all requirements expressed have been met. In this way, advocates identified in the previous phase can help build consensus for the Driver Based Planning process. It is also important to have the necessary constituents of the Driver Based Planning process identified so that all stakeholders can be brought on board. In this consensus-building process, they must understand and agree on what role their teams will play in the process. In some more complicated Driver Based Planning processes, we have seen stakeholder matrices used in order to facilitate this step.

As this consensus is being built, those members of the team responsible for the actual model can build on their partnership with operations to test the model with historical data. As part of the consensus-building efforts, a model which successfully replicates historical results can help instill confidence for a successful outcome of Driver Based Planning.
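Such a historical test can be sketched very simply. In the Python example below, the driver, the model form, and the tolerance are all illustrative assumptions, not a prescribed methodology:

```python
# A sketch of testing a driver-based model against historical data: feed the
# model the historical driver values and check how closely it replicates the
# historical actuals. Driver, model form, and tolerance are hypothetical.

def revenue_model(units_sold: float) -> float:
    """Hypothetical driver-based model: revenue driven by units sold."""
    price_per_unit = 9.5       # assumed driver coefficient
    return units_sold * price_per_unit

HISTORY = [  # (units_sold, actual_revenue) from prior periods
    (100, 955.0),
    (120, 1130.0),
    (90,  860.0),
]

def backtest(model, history, tolerance=0.05):
    """True if every modeled value is within `tolerance` of the actual."""
    return all(
        abs(model(driver) - actual) / actual <= tolerance
        for driver, actual in history
    )

print(backtest(revenue_model, HISTORY))   # True
```

A model that replicates history within an agreed tolerance gives stakeholders something concrete to sign off on during consensus building.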

Phase Four: Coordination

As a Driver Based Planning process reaches maturity, it is ready to be implemented across the company. While Phases II and III required a close partnership with operations and other personnel across the company, it is essential that in Phase IV control reverts strongly to finance, since Driver Based Planning is a finance-led activity. Successful execution of a Driver Based Planning process requires that Phases I – III be successful. Phase IV is no time to let off the gas pedal. Since the process is ready for prime time, there are a few best practices to employ in order to ensure the Driver Based Planning process runs smoothly.

Remember not to make any more changes to the Driver Based Planning process. If Phases II and III were successful, any additional changes should already have been incorporated, and the Driver Based Planning process should be firing on all cylinders. Any new changes would be sure to frustrate those who had already agreed to the process. Finally, although it is essential to fully incorporate Driver Based Planning into any relevant process, it is a frequent mistake to assume that every process of any importance in the company must somehow be impacted by a successful Driver Based Planning initiative. In fact, success of a Driver Based Planning initiative will often result in others wanting to incorporate it into other initiatives. We have seen Driver Based Planning processes become “victims of their own success” at this stage, being employed for purposes other than those for which they were intended. The result is usually damaging to both initiatives. Resist the urge to overextend.

To sum it up, while there is a lot written about best practices in Driver Based Planning, it is helpful to take a methodical approach to planning out such an initiative before undertaking it. Laying out a lifecycle plan which incorporates the best practices for Driver Based Planning at each stage in the process can save a lot of headache down the road.



The Finance Process Lifecycle Quadrants

In our previous post we noted that finance is often called upon to lead corporate processes ranging from strategy to tactics. We noted that most of these processes require input from the larger business community. We also explained how this matrix came into existence. When we started to analyze best practices across finance-led initiatives, a clear picture of the lifecycle emerged naturally. We have introduced this lifecycle and put some terms around it. In our last post we looked at the vertical and horizontal axes as ways of positioning the current state of a process. In this post, we will look at the four quadrants in the lifecycle, put some names around them, and briefly discuss some of the best practices associated with each. Since the arrows indicate the relative path that a process traverses, we will examine the quadrants in order. It is important to realize that even though we discuss the four quadrants separately, the reality of a process lifecycle is that it is much more fluid as it moves from one quadrant to the next.

Phase One: Conception

This stage is formed by the “low” extremes of both the horizontal and the vertical axes. This means that the process is only being envisioned, and that the process at this point is limited to finance-only involvement. There are two very important things to note here. The first is that the idea (or something very similar) for the process may have come from some place outside of finance – for example, we have seen these ideas come out of executive leadership meetings. The second thing to notice is that, wherever the idea originated, the process design is being envisioned within finance. A common mistake we see is the tendency to omit or rush through this stage and move directly to cross-company involvement. Skipping this stage or attempting to move through it too quickly is a mistake. There is immense value in a finance team brainstorming amongst itself before consulting external sources. Best practices in this stage include the anticipation of potential contingencies, building in time to respond to unknown or unanticipatable factors, staging any required data, focusing on desired outcomes, and identifying needed capabilities. Perhaps the most important best practice at this stage is ensuring that the process includes sufficient time to execute best practices as it proceeds through the other quadrants of its lifecycle.

Phase Two: Collaboration

This quadrant is the next point of natural evolution in the lifecycle of a finance-led process. It is also representative of processes that aren’t yet complete, but notice that they are moving more dramatically to the “right” in this quadrant. A goal of processes in this second stage of design, then, is to get them much further along the “completion” axis. Since the process has crossed over into a higher degree of corporate involvement, the process is now in the stage where input is being solicited. The process has now been exposed to elements of the company outside finance for the purpose of feedback and redesign. Some of the best practices for this stage are: a high degree of responsiveness, rapid iteration, and advocate cultivation. Remember that the primary objective is to identify and incorporate requirements in the process which the finance team may have missed. Just as proper execution of the conception phase is critical to success here, success in this phase is essential to continuing a process along the lifecycle into the third phase.

Phase Three: Consensus

The third quadrant maintains cross-company involvement, but the process should move to completion in this stage. As the previous quadrant indicated, the process is still in an active feedback period. As the process crosses into the third quadrant, changes to the process should become more minor and less frequent. During this phase of a project’s lifecycle, finance works largely with the advocates identified in the collaboration stage (those persons whose valuable feedback helped establish requirements) in order to establish confidence in the process among their team members. Best practices for this phase of a process lifecycle include things that are likely to inspire confidence in the process. These include testing the process, reiterating requirements and how the process meets them, and aligning the process with corporate objectives. Successfully proceeding through this phase means moving to the completion of the process design so it is ready to implement in phase four.

Phase Four: Coordination

While the model remains complete, this stage of the lifecycle falls back into the finance-only domain. This is the point where we typically get a lot of questions, such as: “Why would that process seemingly ‘digress’ back into the realm of the finance community?” and “Aren’t these ‘two by two’ matrices supposed to show everything moving up and to the right?” These questions deserve a clear answer.

That answer is critical to understanding why the lifecycle works the way it does. Let’s look at the first question. First, and perhaps most importantly, in Phase Four it is vital that finance exerts leadership over the process. When it is time to actually execute the finance-led process, finance must realize it is called finance-led for a reason. While the process still involves persons across the company, the need for collaboration on the process and consensus building around the process should be settled (if Phases II and III were successful). At this time, it is vital for finance to assume leadership of its process across the company. Second, this is why the lifecycle does not fall back to the extreme of finance-only on the vertical axis (as at the beginning of the initial conception of Phase I) but recognizes that at maturity a finance-led process will be truly finance-led.

The second question is purely polemic. We developed the finance-led lifecycle model for a reason. This model reflects reality and matches our best practice development. It was tempting to “jury-rig” the model and the axes in order to represent the lifecycle of a project as up and to the right. Taking that measure would have distorted the model and made it less relevant to the real world.

In our next blog post, we will apply the model to a popular finance-led process, that of driver-based planning. There has been quite a bit written about best practices for driver-based planning, and we will examine how those best practices fit in the lifecycle model. We will then be able to determine what a successful driver-based planning initiative should incorporate at each stage of the lifecycle.



The Finance Process Lifecycle

Finance is often called upon to lead corporate processes. These processes can run the gamut from strategy input (like long range planning and budgeting) to specific processes (like rolling forecasts). Such processes are rarely insulated within finance and require larger input from the business community. As such, working together with business partners is an important part of the responsibility for the finance organization.

Understanding when and how to engage with these business partners is key to the success of any finance-led, cross-company process. There clearly are times when finance needs to communicate within itself – for example to design the process successfully. When finance does communicate externally with business partners, it is still important for the finance team to communicate within itself too, so that it can put forward a unified front.

We have seen many cases where finance-led processes worked well, and some where they were less successful. Many companies have applied best practice analysis to various specific finance-led processes, but we aren’t aware of any methodology which attempts to explain these best practices in a way that applies to all finance-led initiatives. So we began to map these best practices ourselves. In doing so, we noticed that, across all the finance-led processes we have encountered, best practices tend to correspond to the (for lack of a better term) “stage” in which a process falls. The result was a very clear picture of a lifecycle.

In order to understand the evolution of a finance-led process, we are introducing a lifecycle diagram. In order to fully understand this diagram, it is necessary to first understand the elements used to plot the process on both the horizontal axis and the vertical axis. Then, the quadrants which make up the lifecycle can be understood. We will deal with the axes in this post, and save our discussion of the quadrants and resulting lifecycle for the next.

The vertical axis represents the continuum by which a process is exposed within a company. At one extreme (the “lowest” end of the continuum) a process is not communicated outside of the immediate finance team responsible for that process. At the other extreme (the “upper” end of the continuum) the process is fully exposed to all relevant parties across a company.

The horizontal axis plots the degree of completion of a particular process. At one extreme (the “left” end of the continuum) the process is merely envisioned and not yet really defined. At the other extreme (the “right” end) the process is fully implemented and in operation.

By using these axes together, it is possible to plot the current position of any finance-led process. By charting the evolution of finance-led processes, it is possible to understand and plan their lifecycle. Further, by understanding the best practices relative to any given process at any given stage, it is possible to optimize the prospects for success for any finance-led process.

In our next post, we will look at the four quadrants formed by this two-by-two matrix. We will examine the meaning of each quadrant, which we have labelled (starting in the lower left-hand quadrant and working up, over, and then back down) Conception, Collaboration, Consensus, and Coordination. Understanding the meaning of each quadrant will help explain the stages of a finance-led program. It will also help explain the potential pitfalls facing a finance-led project at each stage of its evolution.
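To make the matrix concrete, here is a minimal sketch of how a process could be classified into a quadrant. The 0-to-1 scales and the 0.5 cutoffs are our own illustrative assumptions, not part of the diagram itself:

```python
def quadrant(exposure: float, completion: float) -> str:
    """Classify a finance-led process on the lifecycle matrix.

    exposure   -- vertical axis: 0.0 (kept within the finance team)
                  to 1.0 (fully exposed to all relevant parties)
    completion -- horizontal axis: 0.0 (just envisioned) to 1.0
                  (fully implemented and in operation)
    """
    if completion < 0.5:
        # Left half: process is still being defined
        return "Conception" if exposure < 0.5 else "Collaboration"
    # Right half: process is largely implemented
    return "Coordination" if exposure < 0.5 else "Consensus"
```

A newly envisioned process known only to finance plots as Conception; the same process, once fully implemented and exposed company-wide, plots as Consensus.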



Analyzing Small Clinical Trials

When it comes to clinical trials and data analysis, size matters. By design, even a “smaller” clinical trial still contains a statistically significant sample size. The challenges in analyzing such a dataset are different from the challenges faced by clinical trials with much greater sample sizes, and specific statistical rules usually apply.

Quite a bit has been written on the subject already, and there was even a course devoted to the topic in 2012. Even though much of this writing is more recent, we think the authoritative work on this subject is now over a decade old: Small Clinical Trials: Issues and Challenges, a 2001 symposium book published by the National Academy Press which documents a study conducted jointly by the Institute of Medicine, the National Academy of Sciences, the National Academy of Engineering, and the National Research Council. The long list of contributors, editors, and reviewers behind this book is impressive.

It is not uncommon in smaller clinical trials to conduct “rolling trials” which analyze early results as a way to increase focus on certain participant populations in later studies. For these trials, it is vital that information be analyzed and statistically significant results be uncovered as early as possible. Using this strategy can often increase the validity, efficacy, and persuasiveness of results. As the book cited above notes (p. 82), “…combining data from various studies to obtain a common estimate can increase the statistical power for the discovery of treatment efficacy and can increase the precision of the estimate.”
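The pooling idea in that quotation can be illustrated with a standard fixed-effect (inverse-variance) meta-analysis sketch. To be clear, this is a generic textbook technique, not the book’s own prescribed method, and the effect sizes and standard errors below are invented for illustration:

```python
import math

def pooled_estimate(estimates, std_errors):
    """Fixed-effect (inverse-variance) pooling of per-study treatment
    effect estimates. Each study is weighted by 1 / SE^2, so the pooled
    standard error is smaller than any single study's standard error --
    the gain in power and precision the quotation describes."""
    weights = [1.0 / se ** 2 for se in std_errors]
    total = sum(weights)
    pooled = sum(w * est for w, est in zip(weights, estimates)) / total
    pooled_se = math.sqrt(1.0 / total)
    return pooled, pooled_se

# Three hypothetical small studies of the same treatment effect
effect, se = pooled_estimate([0.8, 1.1, 0.9], [0.40, 0.50, 0.45])
```

Here the pooled standard error comes out below the best individual study’s 0.40, which is exactly why combining small studies can surface a statistically significant result earlier than any single study could.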

Of course, to accomplish this successful analysis, the studies must be successfully and increasingly focused. Both that step, and the final outcome of conjoint data analysis, can best be performed using a highly sophisticated data analysis tool. As the book cited above goes on to explain, “insight into reasons for the heterogeneity of trial results may often be as important as or even more important than producing aggregate results.” In other words, an application which helps uncover the “why” is useful, whereas a tool which merely makes charts summarizing overall outcomes is not.

Certain applications like R and Excel have been commonly used to analyze data from small clinical trials. When applied correctly, these applications can be excellent for summarizing the statistical outcomes of small clinical trials. As forensic mechanisms which help uncover the reasons why particular outcomes may be achieved, they are helpful only as “trial and error” tools which build one or two charts at a time as various “power pivots” are selected.

Uncovering reasons for trends which will help focus future study participant selection and summarize reasons for results (not just results themselves) requires an application built specifically to help with forensic investigations of data. The Agylytyx Generator was built specifically for forensic analysis.

Sometimes data analysis of small clinical trials can be tough, especially when there are a lot of potential variables involved. In many ways it is not an exaggeration to say that uncovering the reasons for trends can make or break the perceived success of the trial. Ultimately, there is no substitute for a human looking for the reasons why a trial outcome is what it is. The right application designed to help humans conduct the critical forensic analysis can make a big difference.



CFO Perspectives on 2015

Every year about this time it is common to look back at the last year to see if we can identify some common trends. We also try to determine if these trends are likely to continue into the next year or if they were fleeting. Going through this kind of exercise helps us to identify the areas which should receive our focus in the coming year. As 2015 draws to a close, we look back at the trends of the last year through our CFO lens.

Before we launch into these trends, it is important to put these trends in the right context. Every year there is a tendency for many to be overly dramatic as they go through this exercise by saying that this year is unlike any other, or more pivotal, or more important for any number of reasons. We have resisted the urge to summarize previous years (yes, we have been doing this blog for a long time). While we don’t think there is anything magic about the 12 month period we are referring to as 2015, we do think there is something worth noticing in the trends we summarize here.

We do mention these particular trends because they represent a common thread we have noticed among most of our clients. At the same time, we recognize that each case is different. Not all companies have experienced these trends at the same time. They may have started in your company a lot earlier than 2015 and just come to a head this year. They may have started later in the year. Perhaps they have even not begun at all or have already peaked. However, they are significant and common enough to warrant some careful consideration. Since we are not immune from these forces either (especially as they relate to our consulting engagements) we will proceed to write about our observations in the first person as if they happened this year anyway.

First, this is the year we realized the importance of our role.

We started to be invited to more high-level meetings. Executives were reading our emails more carefully and forwarding them more often. Our visibility increased.

The rise of cloud-based applications meant people turned to us in meetings to see how we would react. The financial implications, the legal ramifications, and the security characteristics were all items on which we were expected to have an informed opinion. At the same time, this topic was becoming pressing for us as cloud-based vendors like Workday started to enter our consideration for back office ERP applications.

The CEO and other executives started asking us to interpret financials more. They expected us to understand business strategy, and provide budgeting options which reflected strategic choices. Further, they expected us to be able to translate financial results into strategic language, explaining how our actual outcomes reflected what we set out to achieve. Even if this involved a bit of revisionist history, we were asked to be miracle workers in this capacity.

Second, this is the year we realized how limited we were.

With our increased profile came the increased pressure to perform, and the realization that our ability to deliver key insights was held back by our own limitations.

We realized that analytics were not the same thing as analysis. We weren’t getting enough strategic insight, so we started to look for ways to generate better and faster analysis. Our leadership insisted that they didn’t need another “fact pack” but were looking to our team to provide key insights and analysis on trends. This perspective came to a head when we realized that the questions executives were asking were prompted by our reports, dashboards, etc., and that these static tools were no longer enough to answer the real questions facing the business. So we committed ourselves to finding ways to facilitate our team’s ability to generate these key insights.

Third, this is the year we put buzzwords in context.

Finance organizations have thrived on building a mystique around our own language, with complicated-sounding terms which would make the business perceive we were somehow adding value.

Terms like “zero-based budgeting,” “rolling forecasting,” “balanced scorecard” (as if a scorecard should ever be “unbalanced”), “scenario planning” and many more were not new. In fact, we have been reading about them for years, and have even used some of them in our company. However, these terms seemed to be used and debated a lot this year.

Especially in light of the trends mentioned above, we began to put these terms in the context of our strategic contribution to the business. For example “zero-based budgeting” really meant “wiping the slate clean” in order to accommodate our strategic choices. “Rolling forecasting” really meant “tracking our plan to actual outlook” in order to assess our progress toward our strategic goals. “Balanced scorecard” was replaced with “strategic tracking.” “Scenario planning” became “strategic options.”

For us, the third trend was just a manifestation of the first two. 2015 was the year we became committed to leading the discussion of which strategies were possible to achieve and what would be needed to achieve them. We also committed to spend 2016 in pursuit of ways to help us accomplish this objective.



E & Y CFO Digital Divide Survey Recommendations Summary – Actually Solving the Great Divide

A couple of weeks ago, we talked about a recent study of CFOs by Ernst & Young that had, ostensibly, studied the impact of the CFO, particularly the role a CFO plays and should desire to play, in a company’s “digital” business strategy. We noted the inherent difficulties in identifying a “digital” strategy, particularly for companies that do not have a digital product. We noted how, at its core, the study was really referring to what most of us know as the common organizational misalignment between strategy and execution.

This week, we look at the recommendations from the Ernst & Young study. We really focus on two recommendations made in that study. Those recommendations, when implemented properly, obviate the need for other solutions. They are also the only recommendations which have a chance to really match up against the strategy-execution misalignment we identified as being at the heart of the survey (as we mentioned in our last post).

The study makes four recommendations, which it calls “digital priorities,” for both the CFO and CEO. All four stem from what the study calls the need for CFOs and CEOs to communicate more completely, effectively linking up what the company can do with what it wants to do. It almost seems as if there are two threads in the study. In one thread, the study talks quite a bit about the need for CEOs and CFOs to work together to make strategy operational, particularly amid potentially disruptive trends such as digital business models. The link between the two underlying issues – 1) the strategy-execution gap (the natural gap between CEO and CFO thinking), and 2) the “digital divide,” where future strategic choices reflect more forward-thinking business models – is not clear in the survey. Figure One at the left depicts the first issue, the natural gap that exists between CFO and CEO thinking. It also attempts to establish a link to the digital divide issue by simply stating reasons the CFO should become more “digital” in their thinking.

At first glance, this generic sounding approach seems to go hand-in-glove with the study recommendations. Of the four main recommendations in the study, two of them – using analytics to measure and predict disruption, and creating a governance and risk oversight framework – actually have a chance to create a solid and permanent link between issues #1 and #2 above. If implemented correctly, systemically, and continuously, analytics can embody that framework and apply it to company performance, forecasts, etc. on an ongoing basis.

In our most recent post on this topic, we introduced a graphic which expresses, accurately and in more detail, the reasons for the existing gap between finance departments (and the CFOs who run them) and corporate strategy. In this post, we illustrate how a correctly configured analytic package with built-in risk and governance frameworks operates.

An analytics package which truly has a risk and governance framework built into it functions as a link between #1 and #2 above. Few analytic packages give a CFO this kind of control. The Agylytyx Generator functions that way: by allowing companies to build in their own preferred risk and governance profiles, those elements become “building blocks” in the same way analytic constructs do.

The result is that users of the Agylytyx Generator in a company get real-time application of those “constructs,” in a user-defined template, to whatever data they select (plan, actual, forecast, budget, etc.). In this way, the Agylytyx Generator is able, through user-applied frameworks, to continuously translate financial forecasts, results, budgets, actuals, and plans into strategy language. The two recommendations that we cover from Ernst & Young’s Digital Divide survey do not inherently link the CFO/CEO thought dichotomy with “digital divide” issues. But if these recommendations are implemented correctly (as Figure 2 shows), the lines of communication between finance and strategy become continuous, and there is then no “digital divide.”



E & Y CFO Digital Divide Survey Summary – The Strategy-Execution Gap in Disguise

While we may not agree with everything in the survey, the 2015 Ernst & Young sponsored study of CFOs entitled “High-performing CFO. Driving and enabling the shift to digital. Partnering for performance” is an instructive document, most notably in its recommendations. We will cover the recommendations extensively in the second part of this two-part series. This first part focuses on the results of the study itself; the second part will focus on the recommendations.

The overall context for the study was the strategic role of the CFO, how that role has strengthened over the past three years, and the area(s) in which the CFO was having the least strategic impact. The study found that issues of digital strategy were the ones where CFOs had the least strategic impact. That finding is partly built in – the name of the study seems to have been selected after the study and specifically for that reason. Still, the outcome was somewhat surprising, since CFOs often own IT, the organization one would think is most responsible for driving what Ernst & Young calls “digital disruption” in the study summary.

The study documented something that we have been talking about here at Agylytyx for years. The Ernst & Young graphics shown in figure one explain that CFOs have a greater strategic role than they used to have, but note that significant obstacles remain, mostly due to their traditional finance responsibilities.

In roughly three-quarters of companies responding, the things that were going well allowed CFOs great influence on corporate strategy. Not surprisingly, the things holding the CFO back from greater input on strategy were traditional finance activities such as cost cutting. CFOs also continue to suffer from organizational and political boundaries which limit their strategic input.

Early on, we identified the reason these limitations commonly present themselves. We discuss figure two at length in another post, but there are two salient points which put the Ernst & Young study in context. First, current finance organizations led by the CFO still have setting financial goals and targets as the primary task they face. Second, these and other financial outputs serve as context inputs (a kind of feedback loop) to business strategy. In the absence of a continuous translation mechanism, there is nothing which joins corporate strategy and financial results and plans directly.

The next parts of the study are heavy on both anecdotes and statistics which underscore the uncertainty of today’s market and economic climates. The study advocates for a greater understanding of the risks and opportunities posed by such an environment, especially urging CFO’s to be more proactive in assessing the impact of what it calls “the shift to digital.” The study never defines that shift specifically, instead referring to numerous authors and other studies in its footnotes who advocate that this shift is taking place. The case studies cited in these sections, as well as later in the study, seem to discuss companies that 1) are purely or largely digital in the nature of their products or services anyway (like CNBC) or 2) are primarily traditional companies (like the Aviva Insurance Group) which do not define what digital means in those firms. None of the case studies involved actually seem to identify the role which the CFO played in the evolution of a company’s “digital” strategy.

In the final three sections, Ernst & Young begins to focus more on recommendations, so we will largely cover that in part 2 of this series. What was interesting to us is how much the recommendations actually focused on closing the gap between strategy and execution. In fact, the recommendations are sound advice for any firm that has this problem. In the same post that we link to above, we cite McKinsey Research statistics which estimate that 90-95% of companies have this problem. Indeed, if one were to remove all “digital” references from the Ernst & Young study, the study would still make perfect sense.

If CFOs really want to increase their influence on strategy, then regardless of whether there is a “digital” component or not, they will want to follow some of the recommendations in this study. In our next post, we will look very closely at one specific recommendation from the study and how it may be the key to linking CFOs more closely to corporate strategy.



Best Practices for Driver Based Modeling

In our last post, we looked at what driver-based modeling really is and when it can be used successfully. In this post, we focus on best practices for building driver-based models. We have listed 10 best practices to increase the reliability of driver-based modeling, divided into three sections: what to do before you start the modeling process, what to do during it, and what to do when you have completed it. Following these steps will dramatically improve the likelihood that driver-based planning will succeed at your firm.

Before you start:

1. Choose your time wisely. Plan to spend 90% of your time developing the model and 10% tweaking scenarios. Map out a proposed timeline for development of your model. Once you build a model in which you have confidence, running scenarios becomes an easy and fast process.

2. Understand requirements. The way you manipulate the model and create output from the model are dependent on decision maker requirements. Ask the decision makers and/or your customers in the business to offer as much detail as they can regarding the relevant scenarios they plan to consider before you start modeling.

3. Build consensus. There is often consensus around the need to do driver-based modeling of the business – it is hard to argue that this kind of scenario planning is a bad idea. That kind of consensus will help, but real agreement needs to go a lot further. Capitalize on this consensus to make sure there is also agreement that your team is the right one to do this activity, and on the proposed timeline involved. This step is vital since political “losers” in a scenario are likely to attack the credibility of the exercise.

During the modeling process:

4. Focus on the things you can control. It does no good to build business drivers into models which your company can’t do anything about. That doesn’t mean all inputs have to be controllable, though. Do not confuse variables with drivers: just because something is a variable in your business model does not mean it is a driver. All drivers are variables by definition, but not all variables are drivers.

5. Establish redundancy. Have more than one person who understands the model completely. This one seems obvious, but there continue to be cases where only one person created a model and so only one person can support it. Beyond support, there are many other good reasons to do this.

6. Embrace trial and error. Good models take time to build, and they are iterative. You almost certainly won’t choose all the correct variables and the right sensitivity levels the first time. Allow yourself the time and latitude to make major structural changes in the model. The better understood requirements are, the less time this will take.

7. Check in on requirements frequently. Your understanding of the requirements may evolve, so as you are building the model it makes sense to find out whether any changes have occurred. The more changes you can accommodate while you are still building the model, the easier you will find the validation, vetting, and usage process.

After you have completed:

8. Use historical data to vet the model. If a driver-based model really describes a business, and the right drivers have been identified with the right settings, it should come close to replicating actual historical results. This step will also help you build political consensus.

9. Avoid making structural changes. Don’t second guess the model after it is completed. If you have followed best practices listed above, the output won’t lie. Avoid the tendency to make any major structural changes in the model, or you risk derailing your timeline and unravelling the consensus you have worked so hard to build.

10. Don’t overextend. A successful driver based modeling effort will naturally lead people to the conclusion that the same model can and should be used for other purposes as well. Although it may be tempting to make some “small tweaks” in the model in order to use it for other purposes, resist that urge. Instead, consider the steps outlined here and start fresh with a new model built specifically for the purpose requested. There may very well be reusable components from the original model, but the requirements assessment should uncover that fact.
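Best practice #8 above amounts to a simple backtest, which can be sketched in a few lines. The model, the tolerance, and the historical figures here are all hypothetical:

```python
def vet_model(model, historical_periods, tolerance=0.05):
    """Replay a driver-based model against known history.

    Each period supplies the driver values that actually occurred plus
    the actual outcome; the model is vetted if every modeled outcome
    lands within `tolerance` (a fraction) of the actual result."""
    failures = []
    for period, driver_inputs, actual in historical_periods:
        modeled = model(driver_inputs)
        if abs(modeled - actual) > tolerance * abs(actual):
            failures.append((period, modeled, actual))
    return failures  # an empty list means the model passed

# Hypothetical revenue model and two years of known history
revenue_model = lambda d: d["price"] * d["units"]
history = [
    ("FY14", {"price": 100, "units": 4800}, 481_000),
    ("FY15", {"price": 102, "units": 5000}, 508_000),
]
```

Running `vet_model(revenue_model, history)` returns any periods where the model misses by more than 5%, giving a concrete, sharable artifact for the consensus-building step.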



When Driver Based Modeling Could Work (or Not)

A driver based model, simply put, allows users to easily create scenarios based on changing key assumptions about the things that matter to your company. The output of such “what-if” scenarios is usually expressed in financials.

Driver based modeling can be overused – it is not always applicable and can be overextended. In this post and our next post, we provide specific guidance to help understand what driver based modeling really is and how to do it successfully.

To develop such a model, it is necessary to understand 1) what the variables are 2) the way the variables impact each other and the rest of the financials 3) what the difference is between variables and drivers and 4) how the way the variables behave may change over time.

To understand the limitations of driver based modeling, it is important to understand what it isn’t.

Driver-based modeling is not synonymous with sensitivity analysis. A successful driver-based model must have the sensitivity to variables understood and incorporated. In other words, sensitivity analysis is not an output of the model; it is a prerequisite to building the model itself.

Driver based modeling is not an efficient frontier technique. To make an efficient frontier requires creating all possible scenarios. That is not what driver based models output.

Similarly, driver based modeling is not an optimizer. That is also not what driver based models output. The optimal solution is not always executable. Successful driver based models plan scenarios around what is possible to achieve.

The very concept can be overused. Driver based planning is most applicable in long range planning or other resource allocation exercises. As a general rule the more complexity which exists around these decisions the more useful driver based modeling becomes.

It is not applicable as a predictor of business outcomes for a specific time period. For example, driver based modeling is not appropriate for companies to formulate guidance for investors, predict EVA, calculate expected dividend payments, understand likely treasury yields, etc.

Driver based planning can also become too complex to be useful. It is a good idea to limit the number of scenarios under consideration. It is also a good idea to limit the number of drivers which an end user can change.

It is important to understand the difference between variables and drivers. The difference lies in what a business can control and what it can’t. Drivers should be things which are under the control of a business; they are the subset of variables which can be changed by those seeking to understand the impact of a scenario. Variables which are not drivers are things which are not under the control of a business but which may fluctuate as well. It is important to separate those variables into a different “panel” of the driver-based model so that they can be changed when needed but also held constant across scenarios.

A simple illustration helps explain the difference between a variable and a driver. Interest rates may be a variable in a model. They may affect a business, but they should not be a driver because your company cannot control them. Price could be an example of a driver: changing price assumptions may have a large impact on the business model, price is within your company’s control, and it may fluctuate enough to matter.
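The two-panel idea can be sketched in code. The driver names, the external variables, and all the numbers below are hypothetical, chosen only to mirror the price and interest rate illustration above:

```python
def project_profit(drivers, externals):
    """Toy driver-based model. 'drivers' holds the levers the business
    controls (price, unit volume); 'externals' holds variables it cannot
    control (an interest rate), kept in a separate panel so scenarios can
    hold them constant or flex them independently."""
    revenue = drivers["price"] * drivers["units"]
    interest_expense = externals["interest_rate"] * externals["debt"]
    return revenue - interest_expense

base = {"price": 100.0, "units": 5000}
ext = {"interest_rate": 0.05, "debt": 200_000.0}

# Scenario: a 5% price increase, with the externals panel held constant
scenario = {**base, "price": base["price"] * 1.05}
```

Comparing `project_profit(base, ext)` with `project_profit(scenario, ext)` isolates the effect of the one driver that changed, which is exactly the kind of “what-if” output described at the top of this post.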

In this post we have examined what driver-based planning is and when (and when not) to employ it. In our next post, we will examine some best practices for driver-based planning.



Why Finance Should Want to Own Strategic Analytics

Our last post noted that the question of who should own data and analytics has been a popular one lately. We noted that several posts on finance-related blogs and LinkedIn groups have focused on this question recently.

In that previous post we also noted the critical distinction between operational and strategic analytics in most firms. We noted that it was not desirable for finance to take ownership of operational analytics. In this post we turn our attention to the desirability of finance taking ownership of strategic analytic support.

In many companies, strategic metrics often focus on the same topics as operating metrics. Consider three examples from companies in very different industries (many others could be cited):

Professional services companies may make tactical operating decisions regarding bench strength and utilization of resources, but the strategic decisions in the same companies may be based on decisions which will allow the company to maximize these metrics over the long term.

Retail companies may rely heavily on analyzing the day to day patterns of product sales across their website for operational decisions about pricing, ordering, and discounting patterns. These very same companies make strategic decisions about acquisitions and new product investment based on an abstraction of this information.

Manufacturing companies which analyze their production and distribution patterns in order to make short term operating decisions about raw material inputs, inventory, and means of distribution may rely on analytics around margin analysis and channels in order to make long term decisions.

Many companies which struggle with strategic decisions have short term operating decision making processes which are well understood. In fact, if a company does not have short term operating decision making in place, there will be no point in making effective strategic decisions since the company will not be competitive in the first place.

On the other hand, if a company has strong operating decisions without the ability to make equally strong long term strategic decisions, it will become less competitive over time. One study famously suggested that the impact of a failure to make effective strategic decisions over time would result in a 40% reduction in value to shareholders.

There are good reasons why finance involvement in the operating metrics of a company is not desirable and may even be deleterious. These reasons are documented in our previous post. There is an even better reason why finance should not want to be involved in operating metrics, and it has to do with time allocation.

Even if finance were a strong partner in operating metric analysis, this is rarely a good use of finance time. All things being equal, finance should dedicate itself to analyzing operating metrics and performance information in order to help the company better understand its strategic decisions.

Consider the examples provided above for illustration. The operating decisions considered crucial to these companies are very important – in fact, these operating decisions are the business of these companies day in and day out. The persons who run the business must be experts in providing quick analysis of these metrics to help executives make the daily and weekly decisions affecting these businesses. In each of these examples, strategic decisions will also be vital to the long-term viability of the company.

More importantly, in each of the examples, participation by finance is vital to the support of well-informed decisions. Only finance will typically have the insight and data required to support critical investment decisions regarding product mix and channel mix. Finance typically is best suited from a skillset perspective to adequately assess the impact of prospective decisions and choices. Finance will have the best view then, of the analytics required to best inform portfolio evolution decisions regarding bottom line margin analysis.

Finance can produce analytics to support granular operating decisions. But the time finance has, coupled with its unique position within a company, makes it best suited to support strategic decisions. Finance should care a lot about owning strategic analytic support.



Why Finance Should Not Want to Own Operating Analytics

The question of who should own data and analytics has been a popular one lately. Several posts on finance-related blogs and LinkedIn groups have focused on this question recently. Judging from the heavier-than-usual volume on these threads, perspectives on the answer are pretty broad.

The most enlightened answers to this question tend to be the most idealistic – they generally focus on the fact that the notion of “ownership” is an outdated one which shouldn’t be relevant. These perspectives are probably correct in a vacuum. The fact remains that in all companies someone must maintain the single source of truth, and one group is usually looked to in order to interpret that single source of truth.

In real-world corporate environments, the answer varies greatly from company to company, and perspectives tend to cluster along industry lines. The tendency to think about data ownership from our own point of view is very human. The man with a hammer thinks everything is a nail.

Across all company sizes, types, and industries, the distinction between operating and strategic metrics is a useful one when addressing this question. The dividing line between the two is not always clear, and in most companies operating metrics are actually more important to the firm’s short-term survival than strategic ones.

The difference between operating metrics and strategic metrics should not be confused with the importance of decisions in a company. In many companies the executives involved in strategic decision making are the same executives who make daily decisions, based on operating metrics, which will define the way the company does business in the next week or even the next day. Even so, ownership of the data and analytic support for these two different types of metrics almost by definition belongs to different teams.

Just a few examples of processes which produce operating metrics in various companies include customer service, website performance, project management, clinical trials, and industrial machine manufacturing. All these processes produce lots of data and are potentially very important for their respective companies, but they do not yield the kind of strategic metrics vital to the long-term health of a company. Suggesting that finance should “own” the data and analytics for these metrics is obviously inappropriate.

In fact, in many of these cases, it actually would hurt a company's ability to respond and to do business if finance were to "own" the operating analytic process instead of the relevant business function working directly with executives to understand, evaluate, and support these critical decisions. For a finance team to interject itself into this process at best represents an unacceptable delay; at worst it may actually distort decision making, since finance may not have as deep an understanding as the business people closest to the process.

There is a very important case – strategic decisions - where it is appropriate for finance to “own” data and analytics. We will consider that case in our next post.



Analytic Portals for Customers

Many companies provide, or wish they could provide, data externally to their clients. We have run into several situations where this is happening or being considered at various levels. In one case a company sells data to its customers today but delivers it in a Microsoft Access™ database. In another case a company sells data to its customers but delivers it as PDF reports attached to emails. In yet another case a company has accumulated a lot of data – arguably the most in the world in its particular industry – but has not yet figured out a way to monetize that information. In all these cases, the sale and delivery of the information is a lot harder than it should be.

Selling and distributing data to customers over the internet does not have to be difficult. Still, there are several challenges that many firms encounter today. One is simple inertia: a company may be so invested in providing data one way that it resists streamlining efforts it doesn’t understand. Another is perceived logistical complexity: attempting to make the jump today requires cobbling together bits and pieces of different technologies.

Certainly the idea is appealing. Providing customer interfaces from a single backend infrastructure would be easy if it were technically possible and reasonably priced. If the backend could be refreshed with automated data updates, so that clients experience those updates in real time, this delivery method would improve the efficiency of data delivery and probably make data sales more appealing even to those companies who have not yet determined a way to monetize their data.

The single largest impediment to quickly and easily syndicating data for internet sales and distribution is probably the lack of a turnkey syndication system. Creating a method for storing and updating data is not easy, but there are content management and database technologies that are pretty strong at that task. There are also portal-creation technologies that allow provisioning of “instances” for users and even provide browser-based authentication which restricts access to that environment.

Making syndication easy requires a system that allows a company to provision a portal for an end user and also define the data which will be accessible to that user through the portal. To date, no system has combined the data storage/updating/access control capability with the portal creation and authentication capability. That combination of functionality is what is required for a company to really be competitive in the data sales and distribution business.

To succeed, this system must be administrable by business users rather than IT. For example, a consultant delivering an order to a customer should be able to provision the portal, add users, add data levels, and so on without putting in a request to IT. Routing these items through IT does not scale – IT becomes a bottleneck as the company attempts to grow.
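The self-service workflow described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the Agylytyx implementation; all class and method names are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Portal:
    """A provisioned client portal with its own users and data scope."""
    client: str
    users: set = field(default_factory=set)
    data_levels: set = field(default_factory=set)  # e.g. {"revenue", "expenses"}

class Syndicator:
    """Minimal self-service syndication: a business user provisions portals
    directly, with no IT ticket in the loop."""
    def __init__(self):
        self.portals = {}

    def provision(self, client):
        self.portals[client] = Portal(client)
        return self.portals[client]

    def add_user(self, client, email):
        self.portals[client].users.add(email)

    def grant_data(self, client, level):
        self.portals[client].data_levels.add(level)

# A consultant fulfilling an order end to end, without a request to IT:
s = Syndicator()
s.provision("Acme Corp")
s.add_user("Acme Corp", "analyst@acme.example")
s.grant_data("Acme Corp", "revenue")
```

The point of the sketch is the absence of any approval step: provisioning, user creation, and data grants are all one-line operations available to the business user.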

There are numerous benefits to creating a quick and easy way to syndicate data for customers online. In addition to building stronger customer relationships which improve retention rates, online consulting opportunities and improved data resale rates increase transaction sizes and lifetime customer value. Finally, the ease of updating and delivering data through such a portal can dramatically reduce delivery costs.

The Agylytyx Generator is a turnkey way to create a consolidated backend infrastructure which a business person can use to define and create high-value analytic portals for customers.



Report Automation Means Applying Reports to Anything

We all create reports. We have in the past, and we will again. We use different tools to make that happen. Many of us use Microsoft Excel to generate our reports. We may also use a report writer such as Crystal Reports or Cognos with TM1. Some cloud products like Host Analytics and Adaptive Insights have standard reports built in. We may even achieve some degree of report automation – by using repeatable OLAP queries, designing standardizations with PowerPivot, or saving custom reports in other applications.

Companies used to generate too many reports, but most seem to have found a good balance between reporting and analysis. There was a time when many companies were guilty of over-reporting. In fact, one high-profile consulting group famously recommended that a group stop producing reports and wait – then resume creating only those reports someone asked about or noticed were missing. We think the pendulum in most companies has swung back to the middle. Most reports which are generated regularly do seem to support decision making and analysis.

Creating these reports may be a normal or even frequent occurrence for many of us, but it is rarely, if ever, the bulk of our jobs. Most of us are called upon to perform ad hoc analysis as well. We are expected to generate certain reports, but we typically have other significant responsibilities, usually related to this ad hoc analysis. The faster and more effective we are at creating reports, the more time we have for these other tasks. We often wish we could simply apply our reports to the particular data set we are analyzing.

There may be any number of reasons we can’t apply a report to a data set we are using for ad hoc analysis. If we use PowerPivot, the data may not be in a format we’ve already standardized. If we use system-generated reports, the information may come from a different system than the one in which we’ve built our reports. Often the report format needs customization to analytically support the data set we are looking at – many times it might be easier to build a new report from scratch than to repurpose an old one. The net is that our standard report formats don’t always lend themselves well to ad hoc analysis.

Real report automation means being able to apply any report format we have created to any data set, so that we can hasten our ad hoc analysis. When we are asked a question which requires us to build a model, some charts, a presentation, scenarios, or any combination of these, we are usually using custom-built data sets. There is a reason we save reports and why people think they are valuable – they usually contain key information we use to make decisions. We should be able to quickly and easily apply any report to any slice of data (any scenario we’ve created, any model we’ve built) to help us in our ad hoc analysis. If your team can’t do this today, they aren’t using the Agylytyx Generator and they should be. Contact us today for a free demonstration of how reports can be applied to scenarios or models.
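The idea of applying any format to any data set amounts to keeping the report definition separate from the data it renders. A minimal sketch in Python (the format names, fields, and data are hypothetical, not a real product API):

```python
# A saved report is just a format specification (chart type, metric, grouping)
# kept separate from data, so the same format can run against any data set.
report_formats = {
    "margin_by_region": {"chart": "bar",  "metric": "margin",  "group_by": "region"},
    "revenue_trend":    {"chart": "line", "metric": "revenue", "group_by": "quarter"},
}

def apply_report(fmt, rows):
    """Apply a saved format to an arbitrary data set (a list of dict rows)."""
    spec = report_formats[fmt]
    totals = {}
    for row in rows:
        key = row[spec["group_by"]]
        totals[key] = totals.get(key, 0) + row[spec["metric"]]
    return {"chart": spec["chart"], "series": totals}

# The same saved format runs against an ad hoc scenario we just built:
scenario = [
    {"region": "EMEA", "margin": 120},
    {"region": "APAC", "margin": 90},
    {"region": "EMEA", "margin": 30},
]
result = apply_report("margin_by_region", scenario)
# result == {"chart": "bar", "series": {"EMEA": 150, "APAC": 90}}
```

Because `apply_report` takes the data set as an argument, the format works equally well against a scenario, a model output, or a standard extract.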



Achieving Financial Governance through Access Control

Our last blog post ignited some controversy. That post argued that retaining access control as a business user is a much better governance strategy than “outsourcing” grants of access to an IT department. In a LinkedIn group dedicated to the finance community, comments on this perspective ranged from “finance time is better spent doing other things” to “finance does manage access controls at our company.” This concluding post on the subject illustrates how easy it is for finance to retain access control and ensure corporate governance – how finance departments can meet governance requirements by managing grants themselves on behalf of all business users.

The issue of access controls and governance is not a new one. It did surprise us how many people actually considered governance implications when choosing to let their finance departments handle access control grants themselves. It surprised us even more that these stories came in from all over the world, and from organizations of different sizes. One user told us her team makes Hyperion grants themselves, another mentioned controlling access grants using Host Analytics, and still another referred to his finance team’s skills with Windows Active Directory and OLAP.

Granting access to authorized users is nothing new, but these grants can get complex quickly. For example, a company may have client representatives who need to see all data pertaining to a particular client account – cost data and expense data, for all products and services, in all geographies. In a very complex series of data grants, a company might grant one user all the data pertinent to revenue for a single product or service across regions and channels; another user might get access to revenue information for all products and services in all regions, but only for a particular channel of distribution. Others might have the same types of grants, but for expense information. In rare cases, a user like a regional General Manager might have access to all revenue, cost, and expense information for that particular region only. Many products, including the ones listed above, can handle such grants.

The issue of various levels of access control, especially grants that “stripe across” other data sets, was relatively new. The Hyperion user referred to previously seemed surprised to hear that this was possible. To effectively create grants which cross lines such as the ones mentioned in the previous paragraph, a product must support the creation of dynamic datasets and link its access control strategy to the creation of those datasets. In the illustration provided here, a dataset must be created which represents all the costs, expenses, and revenues related to a client regardless of distribution channel, region of the world, or products and services ordered. Next, access to that dataset must be granted to the individual in the company who “represents” that customer. The Agylytyx Generator is the only product we know of which makes it easy to create those datasets and dynamically assign users access to them. Since all this can be done by the business user, compliance with access control rules is assured.
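One way to picture a grant that “stripes across” dimensions is as a set of filters, where the rows the filter matches are the dynamic dataset itself. The sketch below is purely illustrative (all users, dimensions, and data are hypothetical):

```python
# A grant is a dict of dimension filters; None means "all values" on that
# dimension. The rows a grant matches ARE the dynamic dataset, so defining
# the dataset and controlling access to it are a single step.
grants = {
    # Client rep: everything for one account, across all other dimensions
    "rep@co.example":  {"account": "Acme", "region": None, "channel": None, "measure": None},
    # Channel analyst: revenue only, one channel, any account or region
    "chan@co.example": {"account": None, "region": None, "channel": "direct", "measure": "revenue"},
}

def visible_rows(user, rows):
    """Return only the fact rows this user's grant stripes across."""
    g = grants[user]
    return [r for r in rows if all(v is None or r.get(k) == v for k, v in g.items())]

facts = [
    {"account": "Acme", "region": "EMEA", "channel": "direct",  "measure": "revenue", "value": 100},
    {"account": "Acme", "region": "APAC", "channel": "partner", "measure": "expense", "value": 40},
    {"account": "Beta", "region": "EMEA", "channel": "direct",  "measure": "revenue", "value": 25},
]

rep_view  = visible_rows("rep@co.example", facts)   # both Acme rows
chan_view = visible_rows("chan@co.example", facts)  # direct-channel revenue rows
```

Because the grant and the dataset are one structure, a business user editing a grant is simultaneously redefining exactly what that user can see.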



Access Control is really about Financial Governance

“Access Control” – even the words make it sound important. There is a good reason for that. The most basic notion of access control – basically “who can see what” – is extremely important. Very often critical issues of confidentiality are involved. No firm wants its client lists exposed. There are very specific legal guidelines protecting access to employee information. There are Sarbanes-Oxley Act (“SOX”) rules carefully defining the timing of financial information releases. In the best-case scenario, messing up access control puts a firm’s reputation at risk. In the worst-case scenario, it can result in fines and even jail time. As if this were not enough incentive, there is another reason access control is important, and it has to do with corporate governance. In part one of this two-part series, we look at what access control has to do with governance. In part two, we will focus on available access control approaches which address the governance problem as well.

“Access control” has often been the purview of IT. When a user needs access to certain information, the manager or executive who decides to authorize this grant typically completes an online form or sends an email to the appropriate contact with IT who makes the necessary grant authorization. Almost by definition, there is a potential governance problem here because decision makers in departments like corporate finance are dependent on their IT partners for access control. “Governance” in this context means that corporate finance controls “who can see what.” In this case, they do not.

For true governance to exist, access control must be in the hands of business users like corporate finance. No matter how automated the process may be, if corporate finance does not have direct control over the assignment of roles and access, the conditions of governance do not exist. Auditors are usually okay with the fact that IT can assign finance users access to systems – if IT staff can’t actually see the data themselves, they don’t “count” as users with access to the system. We don’t typically think about the fact that IT users could then “grant” themselves access to the system – we simply count on them not to do so. There are myriad other potential problems with this scenario that actually happen. They may be infrequent, but among the ones we have seen: 1) a user was inadvertently granted access because her corporate email address was one letter different from the intended user’s and the manager made a typo; 2) an employee switched roles and should no longer have been able to access sensitive financial data, but the manager forgot to ask IT to deauthorize access; 3) a request to change access controls was made, but the IT person who handles access control was on extended PTO and wasn’t able to address the request for a couple of months.

The fact that governance actually involves someone outside business users like corporate finance matters. True, the examples cited above are human errors which can happen even when IT is not involved. But that is all the more reason access control should remain in the realm of the business user. First, when an “extra” person is involved, the likelihood of this kind of occurrence increases. Second, when a “mistake” occurs, the fact that the “solution” is out of the hands of corporate finance is not in compliance with most governance requirements.

When it comes to sensitive information, particularly in the realm of corporate finance, access control is really a governance issue. Too much is at stake to cede access control responsibility to any other organization. Fortunately, there are solutions. In the second part of this series, we will look at how access control can remain in the realm of a business department like corporate finance.



You Might Benefit from a Construct Library

A Construct Library is a must for most companies. We created them for clients before our software even existed. In fact, long before we started our company, many of us created Construct Libraries within large companies. This kind of Construct Library is good to have. An application which builds in the Construct Library and uses it to automate chart building is a powerful idea.

Even without an application to use it, a Construct Library can serve as a reference for ways to visualize data – a kind of repository of data visualization best practices. When creating a chart, table, or graph, the idea is that this online reference library can be accessed by people across a company to “short-cut” the chart type selection process. Because each example in the library shows data rendered in a particular chart type, the library expedites chart type selection.

Further, an externally referenceable Construct Library can be used to expedite the assembly of templates. In the same way that Construct Libraries make it easier to create charts by serving as a point of reference, that same process can be repeated several times to create a “template” of sorts manually.

As much as a Construct Library can expedite the creation of a single template, it is not a substitute for a template creation platform. In the sense that the template is a manually created collection of objects, it is not a real template in the traditional sense of the word. Rather, it represents the manual assembly of chart types.

When a Construct Library is used within an application, the nature of a template changes. When an application treats Constructs as “building blocks” for creating reports, dashboards, or scorecards, entire templates are created at once.
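The “building blocks” idea can be made concrete with a short sketch. Here a Construct is a reusable chart definition and a template is simply a named list of Constructs, so an entire dashboard is assembled in one step (all names and chart specs below are hypothetical, not the Agylytyx format):

```python
# A Construct Library: each Construct is a reusable chart definition.
construct_library = {
    "margin_trend":  {"chart": "line", "metric": "margin",  "by": "quarter"},
    "revenue_mix":   {"chart": "pie",  "metric": "revenue", "by": "product"},
    "regional_bars": {"chart": "bar",  "metric": "revenue", "by": "region"},
}

def build_template(name, construct_names):
    """Assemble a whole template at once from library building blocks."""
    return {"template": name,
            "constructs": [construct_library[c] for c in construct_names]}

# One call produces a complete three-chart dashboard, not one chart at a time:
exec_dashboard = build_template("exec_dashboard",
                                ["margin_trend", "revenue_mix", "regional_bars"])
```

The contrast with the manual approach is that the library is consumed programmatically: the template is generated from the blocks rather than assembled chart by chart.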

This is the real power of a Construct Library. Sure, Construct Libraries are great tools for any organization to have at its disposal. But harnessing a Construct Library within an application means realizing the true potential of Constructs, so that whole templates can be created at once. If you don’t have a Construct Library, you are missing out. If you have one but don’t have an application which makes use of it, you are missing an opportunity.



What a “Template Creation Platform for Analytics” Is

The phrase “Template Creation Platform for Analytics” is a mouthful. It sounds technical and intimidating when said all at once. Even for those who understand every word, the implications of the phrase are difficult to process. However, parsing each word into digestible bites and then understanding the phrase in context makes it much easier – even though there may be no frame of reference or point of comparison – to understand exactly what the Agylytyx Generator does.

When we used to describe the Agylytyx Generator as a “Template Creation Platform for Analytics,” we would get a lot of glazed-over looks. That was probably because people weren’t used to thinking in those terms, and all they heard were what they perceived as buzzwords. To some extent, we know that is still the case. When confronted with entirely new developments we haven’t heard of before, our first tendency is to discount what we don’t understand. The more use cases the company accumulates, the more understandable this new approach becomes.

It is usually not the first word, “Template,” where we lose people. Everyone knows what a template is – or at least they think they do. (For a technical explanation of where the standard notion of a template falls short, please read “When a Template is Not a Template.”) At least people have heard the word and may even have used it. In rare cases they may even have created a template before.

Those who have created a template don’t usually get lost on the second word either. They are usually the ones hanging in there at “Template Creation…” In fact, even many people who have never actually created a template are still with us at this point. It is not a difficult concept to grasp that a template must have been created in order to exist.

We often start to lose people at “Platform.” Many people are unfamiliar with the word as applied to technology. Those who are familiar with it are often used to hearing it in the context of an infrastructure provider whose offering another vendor uses to deliver its own product or service (for example PaaS, or “platform as a service”). The concept of a “Template Creation Platform” is too much for most people in the sense that it conjures up images of vendors using a product to create templates in order to repackage and sell those templates as part of their own product. We get that, and we concede it is a bit confusing.

But the Agylytyx Generator is designed for the end user, not for other vendors. When we use the term “Platform” we clearly don’t mean it in the “platform as a service” sense. In fact, we put the platform directly in the hands of the end user. What kind of platform is that? A template creation platform, of course. That implication is intentional, and it is why we use the metaphor of “building blocks.”

The “building blocks” lead to the final word in the product description. That final word explains what kind of “Templates” are being “Created” using the “Platform”: “Analytic” ones. Using analytic building blocks, users create their own templates. The Agylytyx Generator is the platform users access in order to do that themselves.

Breaking down each word, it is possible to appreciate that a “Template Creation Platform for Analytics” exists, even though there is no analog for it.



What “Data to Charts in One Click” Really Means

It sounds catchy. Who wouldn’t want to be able to do that? There is an appeal to anything which takes only “one click,” or to being “one click away” from anything. In fact, just about any vendor can (and many do) make similar claims, since technically the final “click” required by a user constitutes “one click” if one starts counting then. Anything can count as one click if a user 1) considers a single chart and many charts the same thing; 2) doesn’t count the previous steps required; or 3) considers a “canned” report format the same as dynamic template creation.

First, there is a big difference between one chart and multiple charts. In a previous post (“Filtering and Pivoting or Making Templates?”) we showed the many steps required to use filters to change a single chart. In the graphic generation package from a leading vendor, the picture that vendor uses to explain its approach portrays a chart with seven different filters which may be adjusted in order to change the chart. That approach makes an interesting case study – setting multiple filters results in multiple “clicks” to change a single chart. Of course, when the chart changes, the previous chart is lost (unless the user remembers the filter settings used to create it). Changing a whole set of charts simply by pointing at a different data set, or clicking back to restore those charts, sounds like a much better approach.

Second, there are always previous steps required. BI vendors typically make their products look easy by leaving out many of those steps. One factor we didn’t even mention in the post referenced above is the amount of preparation work necessary to create the filter alignment in the first place. Products on the market today have not gone much beyond the Microsoft Excel metaphor for creating charts – picking rows and columns of data, choosing chart types, playing with attributes, axes, formatting, and so on. Usually through trial and error (selecting different sets of data, for example), a user can arrive at a single chart. Today’s BI products have either made that process marginally easier or have offloaded it onto IT to program.

Third, so-called “dynamic” templates really aren’t. A few products have created canned dashboards, scorecards, or reports to which filters can be applied. These products follow the same process: define the format to be viewed by the consumer, define the “attributes” (“filters,” “pivots”) to be applied or changed by the end user, and then map the data fields to the proper part of the report format. The outcome is a dashboard, scorecard, or report that can be repopulated and redrawn by the user simply by changing the filters. Because they are filterable, these formats are called “dynamic.” They are not really dynamic; they are static, because the format itself cannot be changed without reprogramming. A superior, truly dynamic option is one which, in addition to the “filtering” capability mentioned above, gives a user the capability to create and edit as many dashboards, scorecards, or reports as they want. (For more information on this critical difference, read “When is a Template Not a Template.”)

A true “Data to Charts in One Click” solution means a few things. First, there is no user specification involved – no pivoting, no filtering, no selecting of data elements, no choosing of attributes, no selection of chart types. Second, minimal (ideally no) data preparation is required. Third, the output is truly dynamic, not predefined.
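The “no user specification” requirement can be illustrated with a small sketch: instead of the user picking chart types and fields, every construct in a library that fits the data set’s fields is rendered automatically. The library contents and field names here are invented for illustration:

```python
# "One click" as automatic matching: no pivoting, filtering, or chart-type
# selection. Each construct declares the fields it needs; any construct whose
# requirements are satisfied by the data set is rendered without user input.
construct_library = {
    "revenue_by_region": {"chart": "bar",  "needs": {"region", "revenue"}},
    "margin_trend":      {"chart": "line", "needs": {"quarter", "margin"}},
}

def charts_for(rows):
    """Return every construct applicable to this data set's fields."""
    fields = set(rows[0]) if rows else set()
    return [name for name, c in construct_library.items() if c["needs"] <= fields]

data = [{"region": "EMEA", "revenue": 100}]
applicable = charts_for(data)  # ["revenue_by_region"]
```

The single “click” is pointing at a data set; which charts appear follows from the data itself rather than from a configuration step.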



Governed Data Discovery Should Mean No Lying with Statistics

A lot of vendors are writing about their approach to “Governed Data Discovery.” Today, all vendors approach the concept of governance the same way – ensuring there is uniform source control over the underlying data used by a BI application, so that the data ties out in all analytics. For real governance, that is not enough. Real governance means that, in addition, all analytics are presented using company-approved and controlled formats.

The term “governed data discovery” is relatively new one, and most people credit the invention of the term to a single source. According to the Gartner Group’s February 2014 Magic Quadrant for Business Intelligence and Analytic Platforms, “Data discovery capabilities are dominating new purchasing requirements, even for larger deployments, as alternatives to traditional BI tools. But ‘governed data discovery’ — the ability to meet the dual demands of enterprise IT and business users — remains a challenge unmet by any one vendor.”

Gartner got it right – the term was invented out of necessity, based on customer requirements. As one senior executive at a Fortune 100 company asked us recently, “How do you keep people from monkeying with the data?” When pressed for specifics, the executive described a very common practice in their company (and likely most others): people were routinely eliminating certain deals or data points as “outliers” when preparing their analytics. Governed data discovery as currently defined means enterprise IT controls the data, so the governance requirement is met in the sense that there is a single source of truth for all analytics.

This governance leaves companies better off than before – at least they can be sure users are accessing the same data. There is some control point beyond a corporate edict that “all users must use a certain data source,” which practically dares users to find other sources of data. In that sense, governance is effective.

For real “governance” to be achieved, data control simply isn’t enough. Governance means a company controls not just backend data applications through enterprise IT; it means the business users of the application will all use the same “building blocks” to create their analytics. For example, if something like “evolution of contribution margin by product by region” is produced, the same chart type and even the same colors will be used across the company. It does not mean one user can use a bubble chart, another a trend line chart, another a scatter diagram, and yet another a tornado chart. Even elements as basic as colors can affect our understanding of data.

In a perverse corporate edition of a “beauty contest,” the best-looking chart often provokes the most discussion, whether or not it is the most compelling way to present the data. Real “governance” means users do not spend their time trying to create the most impressive version of a chart, and meetings are not derailed by discussions rooted in the latest eye-catching graphic.

A book published over fifty years ago, “How to Lie with Statistics,” famously documents how graphic images can misrepresent underlying data. Even if a user does not intend to misrepresent facts, they can still unintentionally lead viewers to incorrect conclusions. The book never alleges that users are accessing incorrect or erroneous data. It assumes the data is valid, but documents the myriad ways end users can and do mislead readers using that underlying data. The point applies to the term “governed data discovery” in a very important way.

The point is this: even when the data completely ties out, without control over the output, companies still have no effective governance. Even when users all access the same underlying data, if companies have no control over the way the data is presented, there is no effective governance. True “governed data discovery” means companies enforce a uniform presentation method as well.



The Difference Between Horizontal and Vertical Drill Down

“Drill down” on a chart is a frequently heard term. It has become so commonly used that most analysts who cover business intelligence make it a category all its own, dubbed “drillability.” Today, we introduce a critical distinction: the drillability offered by today’s applications actually employs a vertical drilling technique, and a new and better way of drilling exists – horizontal drilling.

We do not need a new term until a new metaphor demands one. Residential piping provides a good example. Until plastic piping was invented, there was no “metal” piping or “PVC” piping – there was just “piping.” One didn’t have to say “plastic” because plastic hadn’t been introduced yet. Once it was, the distinction had to be made; eventually, when we talk about the piping in new houses, it is clear we mean plastic piping, since it is now in standard use. In the same way, we need to make a distinction between “vertical” and “horizontal” drilling.

“Vertical drilling” is what we know as “drilling” today. So far the common use of the term “drillability” refers to the way all applications handle the exploration of data, and it is an appropriate term for the act of clicking on a chart element to display what is “behind” it. The term means essentially the same thing to everyone – clicking on a chart leads us to “the next level” of data, so that the chart becomes a gateway into data exploration. This method has become so appealing that it is now the metaphor for data exploration.

(Chart: trends by region.) There are some inherent problems with vertical drilling. Vertical drilling often constrains what we can view. Consider the example to the right – a basic trendline chart (the format and chart elements don’t really matter here). Suppose we would like more information on what looks to be revenue acceleration in Asia, so we decide to drill into the chart. Two things happen under the current metaphor. First, we must decide which “point” on the chart to click, and we will then be presented with additional information about that quarter rather than exploring the trendline as a whole. Second, the information we are presented will be only a single part of the whole. If we click on Asia Q4, is the next thing we expect to see the Q4 revenue for the entire Asia region by product? Is it Q4 revenue for each of the countries in Asia? Is it Q4 revenue for each of the channels of distribution we use in Asia? Rather than help our investigation, we are likely headed down the kind of “rat hole” that vertical drilling frequently leads us into.

Horizontal drilling is a different experience entirely. In horizontal drilling, we choose which charts we wish to see as we drill into our data, even to the point of viewing all the pieces of the whole at once. (Chart: trends by product.) Rather than clicking somewhere on the chart and hoping that whatever appears next will assist our investigation, we take control of the investigation with the same mouse clicks and display what we want instead. In the example above, we decided that we need to understand the factors influencing rapid revenue acceleration in Asia. Horizontally drilling by choosing “Asia” would lead us from revenue trends to every possible factor affecting revenue in Asia. So instead of seeing revenue decomposed for one point in time and a single factor (like Q4 product revenue in Asia), we would immediately see multiple charts (examples on the left).

Horizontal drilling enables us to select any element of a graphic and decompose (“explode”) that element into multiple variables. In this example, we have used horizontal drilling to decompose the revenue trend for Asia into multiple charts depicting the various trends that might assist our investigation. We can immediately deduce from a visual review that the growth rate for product 4 (in the first chart) and particularly the dramatic growth of the reseller channel throughout the region (in the third chart) warrant further horizontal investigation. The only chart that doesn’t help us here is the decomposition by country (the second chart), since all the Asian countries appear to be growing at roughly equivalent rates.
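For readers who like to see the idea concretely, here is a minimal sketch in plain Python (the records, field names, and numbers are all hypothetical, not taken from any real product): a single horizontal drill on “Asia” decomposes the revenue trend along every available dimension at once, instead of forcing one click-path at a time.

```python
from collections import defaultdict

# Hypothetical revenue records; all field names and values are illustrative.
records = [
    {"region": "Asia", "quarter": "Q3", "product": "P4", "channel": "Reseller", "revenue": 10},
    {"region": "Asia", "quarter": "Q4", "product": "P4", "channel": "Reseller", "revenue": 18},
    {"region": "Asia", "quarter": "Q4", "product": "P1", "channel": "Direct", "revenue": 7},
    {"region": "EMEA", "quarter": "Q4", "product": "P1", "channel": "Direct", "revenue": 9},
]

def horizontal_drill(rows, filter_key, filter_val, dims):
    """Decompose the filtered trend along every dimension at once:
    one trend series per value of each requested dimension."""
    subset = [r for r in rows if r[filter_key] == filter_val]
    charts = {}
    for dim in dims:
        series = defaultdict(lambda: defaultdict(int))
        for r in subset:
            series[r[dim]][r["quarter"]] += r["revenue"]
        charts[dim] = {key: dict(trend) for key, trend in series.items()}
    return charts

# One "drill" on Asia yields a whole set of trend charts, not one slice.
charts = horizontal_drill(records, "region", "Asia", ["product", "channel"])
```

A vertical drill would instead return a single slice (say, Q4 by country); here every decomposition arrives in one step, ready to scan.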

Keep in mind that the vertical drilling experience would still have us looking at a single point in time for the “next” layer of data (probably countries). Adjusting that view to create a trendline chart would leave us looking at the second chart depicted here. We would then need to start our investigation all over, vertically drilling into an element of our choosing (say, products) in order to produce, and then drill into, the first chart above.

It doesn’t take much from this simple example to see how much time and effort horizontal drilling saves in our investigation, not to mention the dramatic increase in the likelihood that we will find the answers in our data.

Horizontal drilling, like PVC piping or any such innovation, will take us some time to understand. Eventually, this type of drilling will make vertical drilling obsolete. Horizontal drill down will become the standard for drilling.



Redefining Data Discovery Part III – What a new approach to data discovery means

In Parts I and II of this series, we advocated a new approach to business intelligence, especially in the world of corporate FP&A. This week, we conclude the series with three use cases where this approach can be applied in the real world. These case studies may look familiar, because they describe very common situations in most large companies today. A redefined approach to data discovery makes these processes far easier and more effective.

Use case #1 – New Product Introduction

The situation:
A cross-functional team at a Fortune 100 company was managing the introduction of a new interactive TV product line of over 100 SKUs through eight existing retail channels. There were many complex inputs to gross margins, such as price, discounts, and various materials costs. Gross margin was the key lens management used to evaluate continued investment in the newly created business unit.

The approach without Agylytyx:
The team used Excel to create a P&L forecast based on many assumptions, which meant reviewing multiple scenarios in order to optimize the go-to-market model. Ultimately the team was limited by the spreadsheet in the number of scenarios that could be captured and maintained with ease and data integrity. The data would then be massaged into charts and graphs for analysis by the team and reporting to executive management.

How the team would do it with the Agylytyx Generator:
Since scenario modeling is built in, changing assumptions about price, discounts, material inputs, and so on means simply copying and editing an unlimited number of datasets (as easy as a “save-as” in a spreadsheet) in the Agylytyx Generator. Since the output is already presented graphically, no additional chart building would have been necessary. The product’s “side-by-side” capability means these graphical scenarios could have been immediately and meaningfully assessed. In this case, the team would have been able to assess the impact of a change in one variable, like material costs, on the whole range of other variables – discounts, price, sales forecasts – and ultimately on gross margins.
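As a rough sketch of what “copy and edit a dataset” buys you, here is the pattern in plain Python. The assumption names and numbers are invented for illustration; this is not the product’s interface, just the save-as-then-tweak idea:

```python
import copy

# Illustrative base scenario; all assumptions are hypothetical.
base = {
    "price": 100.0,
    "discount": 0.10,
    "material_cost": 55.0,
    "units": 1000,
}

def gross_margin(scenario):
    """Gross margin as a fraction of net (post-discount) price."""
    net_price = scenario["price"] * (1 - scenario["discount"])
    return (net_price - scenario["material_cost"]) / net_price

# "Save-as" a new scenario, then edit one assumption.
cheaper_materials = copy.deepcopy(base)
cheaper_materials["material_cost"] = 50.0

scenarios = {"base": base, "cheaper materials": cheaper_materials}
margins = {name: round(gross_margin(s), 3) for name, s in scenarios.items()}
# Both margins are now available for a side-by-side comparison.
```

The point of the sketch: once a scenario is just a dataset, producing a variant is a copy plus one edit, and every downstream analytic recomputes from it.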

Use Case #2 – Analyzing and Reporting on Portfolio Complexity

The situation:
A team at a Fortune 500 medical device company was confronted with a portfolio in which over 100 products rolled up into about 10 major product families. These products were sold in each of the six major regions the company had defined. The team was tasked with understanding which products were most likely to be successful, and in which markets, in order to advise the leader of the business where to make investments. Just a few of the key factors in the team’s analysis were average unit selling price trends, actual sales trends (on both a unit and a dollar basis), and the regional popularity of each product and product family.

The team’s approach before Agylytyx:
The team found that their system of record, SAP, could not provide analytical capabilities quickly enough. The team was pulling information from SAP and using it to build a Microsoft Excel file which contained all the information. They used PowerPivot in Excel to analyze data and produce charts, which they would then put into a PowerPoint file. Creating and analyzing the information on these products and product families for each region, using each of the factors mentioned above, proved impossible. Instead the team was using “gut instinct” to attempt to find business insights, creating one chart at a time in Excel and PowerPoint.

What the Agylytyx Generator did for the team:
Templates were created for the entire portfolio, for the product family perspective, and for the product perspective. Analytics based on key factors such as the ones above (and regions) were added to each template. Datasets were created for the entire portfolio, each product family, and each product. Simply by applying a template, such as the product family template, to any product family, the user could immediately see all the key factors for that family. Users could switch product families, or switch to a product template and view all of the key factors for a product. Best of all, any charts could be immediately exported to PDF or PowerPoint. The product ended up saving users “a ton” of time in reporting and analysis.

Use Case #3 – Long Range Planning and Annual Budgeting

The situation:
A software company had four existing business divisions, one of which came from a recently acquired company. The company was attempting to accommodate what it saw as a movement toward cloud-based offerings among its customer base, but was also afraid of cannibalizing its well-established enterprise software license business. During the company’s long-range planning process, by which annual budgets for departments were formulated, executives developed several strategic options. Evaluating these options, choosing one, formulating the budget to manifest that choice, and presenting the rationale to the board all needed to be done within a few weeks.

The team’s approach before Agylytyx:
The team read books, articles, magazines, and whitepapers in order to search for the best way to portray data. The team then pulled information from Hyperion Planning into Microsoft Excel in order to create multiple charts which would represent the various options. The team would then create a presentation in PowerPoint based on their observations from various charts.

What Agylytyx did for the team:
The team was able to select a uniform set of building-block constructs which they could then apply to any of the strategic options they wished to evaluate. First, the team chose the things the executive team should care about as they evaluated their options, so that they could consider the impact on factors like sales, margins, headcount by department, risk, and regional profiles, using the same building blocks for each strategic option. As a result, they were able to “gain actionable insight into their strategic plan” and “link their operating budget with the strategic plan.”



Redefining Data Discovery Part II – What a new approach to data discovery means

After taking a “special request” from a LinkedIn FP&A group, we now turn our attention back to what amounts to a quantum leap forward in data discovery. In this post, we begin to get specific about what a new approach to analytics means.

For many BI use cases, it is more efficient and effective to model the backend source data to fit a comprehensive set of pre-developed visual templates (e.g. dashboards) than to embrace the alternative – empowering users to generate multiple dashboards by creating custom report templates and manipulating a data model to populate them. These benefits are most apparent when multiple complex business scenarios, requiring advanced visualizations from different stakeholder viewpoints, need to be analyzed. In fact, if your job consists of publishing the same report or dashboard each month or quarter and involves no ad hoc analysis or field-specific queries based on those dashboards or reports, you may not need to look for an alternate approach.

Those of us who find ourselves building strategic presentations can realize a 10:1 reduction in analytical processing time and a marked improvement in business insights from this approach, leading to better decisions and improved business performance.

The business problem so many of us face is the limitations of existing applications for visualizing complex portfolios when creating analytic presentations for key decision makers. Time constraints and governance requirements mean teams often present inferior, incomplete, erroneous, or inaccurate analytics when attempting to support analysis and decision making.

For these reasons, we have created an alternative approach, which we use in our application, the Agylytyx Generator. The Agylytyx Generator includes data preparation methods for creating a unified DataMart, self-service tools to create datasets for analysis (e.g. scenarios), and an extensive library of visualization objects – building blocks which users can combine into logical “frameworks.” A unique capability is that users create their own frameworks, and each framework can analyze multiple datasets. With a single click, a whole set of graphs and charts is populated with data from a new dataset, or from multiple datasets. Using the product’s comparison function, users can apply any framework to multiple datasets and compare the output side by side. With other spreadsheet visualization and dashboard tools, each chart or graph would need to be re-created for each dataset. Given time constraints and governance considerations, the visualization “metaphor” used by other products is inferior to the Agylytyx Generator approach.
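The framework idea can be sketched in a few lines of Python. Everything here is illustrative – the chart-spec format and names are our invention, not the Agylytyx Generator’s actual API – but it captures the point that switching datasets re-binds an entire set of charts at once:

```python
# A "framework" is a fixed set of chart specifications (hypothetical format).
framework = [
    {"chart": "trend", "metric": "revenue"},
    {"chart": "bar", "metric": "gross_margin"},
    {"chart": "stacked", "metric": "headcount"},
]

def apply_framework(framework, dataset_name):
    """Bind every chart spec in the framework to one named dataset."""
    return [dict(spec, dataset=dataset_name) for spec in framework]

# One "click": the whole set of charts repopulates from a new dataset,
# and two datasets can be laid out side by side for comparison.
side_by_side = {
    name: apply_framework(framework, name)
    for name in ("plan", "upside_scenario")
}
```

Contrast this with the chart-at-a-time metaphor, where each of the three charts would have to be rebuilt by hand for each dataset.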

In the final post in this series, we will examine some common use cases based on this approach. We close this post with a “deep dive” on the difference in functionality between the approach other applications use when attempting to solve this problem and the approach we have built into the Agylytyx Generator.


Data Model – Dashboard Relationship
Traditional Approach: Inside-out – model the backend source data, then modify the template to fit it.
Agylytyx Generator Approach: Outside-in – hold the template constant and manipulate the data model to populate it.

Drilling
Traditional Approach: Vertical drilling – click on a chart in order to explore the data behind it.
Agylytyx Generator Approach: Horizontal drilling – click on a different dataset in order to explore the data behind a whole series of charts at once.

Chart Building
Traditional Approach: Custom chart building – users build their own charts, one at a time. These charts then have to be organized for analysis and may take several iterations before producing something useful to interpret.
Agylytyx Generator Approach: Custom templates – users build their own templates of charts from wireframes, with the data flows already directed, enabling instantaneous analysis.

Governance
Traditional Approach: Voluntary governance – users rely on business guidance to comply with presentation guidelines.
Agylytyx Generator Approach: Systemic governance – users work from building blocks that comply with corporate guidelines.

Presentation Preparation
Traditional Approach: Chart copying – users copy and paste created charts into presentation software.
Agylytyx Generator Approach: Template export – users export a pre-built presentation deck.



Ten Best Practices for Data Analysis

After we wrote the blog post “10 Signs Your Data Analysis is Inefficient,” it was suggested that we write a follow-up indicating some ways you can tell your process is efficient. Incidentally, the suggestion came to us in a very active LinkedIn group called “FP&A Club,” which you may want to check out if you are not already a member. We felt the suggestion warranted a digression from the series on data discovery we just started.

Here are ten ways you can tell if your data analysis is really firing on all cylinders:

1. Your team has changed their data discovery metaphor.

Since we are in the midst of a series on this topic, we will not belabor the point. We will simply note that a few companies have successfully changed the way they look at data discovery – from the traditional method of “vertical” drill down to the much faster method of “horizontal” drill down.

We will say more about this in a future post. Horizontal drill down is usually enabled when the second best practice is present.

2. Your team’s charts are built for them.

The time we spend adjusting filters and changing chart types is better spent looking through sets of charts built from different drill-down perspectives. For example, a dashboard composed of several analytics about a product (sales, TAM, market share, revenue, gross margin) might not be meaningful by itself, but it may put us on the right track. Playing with various combinations, like regions or channels of distribution, will eventually (we hope) uncover a key insight. An approach in which all these charts are already built allows users to simply look through them for key insights, rather than relying on the user to “build” charts by playing with filters and hoping to stumble on the “right” discoveries.

3. Your team can create and edit entire templates within minutes.

Let’s face it, templates take time to create. Despite having access to preformatted templates (which are even available on the web), customizing a template can be tedious and time consuming. A best practice instead is to leverage a template creation platform where users can focus on analysis rather than on customizing charts.

4. Your team has developed very strong writing skills.

Sounds basic, right? Read on. As many have famously discovered, it’s a lot harder to write a little than to write a lot. The shorter and more impactful you can make bullets, the more effective they will be. Too often we leave this as an afterthought in the analytical process. Teams that are successful are invariably very good at writing analytics bullets. It is not something we can automate – no application can do it for us – at some point there is an inevitable need for human interpretation of the analytics. There are exceptions to every rule, but notable best practices for these bullets include the following. They are usually:

Positioned properly – Bullets are often placed next to an analytic so as to best interpret it and not leave the interpretation to the reader.

Written concisely – A single bullet usually fits on a single line in an 18-point font.

Edited well – Adjectives, adverbs and articles are often left out.

Presented consistently – The same bullet structure for each item is essential to avoid “cognitive dissonance” – in other words, if you start one bullet with a verb, they all need to start with a verb throughout the presentation. Also, make sure punctuation is consistent – for example, don’t end some bullets with a period and others without.

Precise linguistically – This may be more art than science, but exciting-sounding words like “significantly” are generally less informative than, for example, “18% Y/Y growth.”

How often have we heard “the charts speak for themselves” or “just slap some analysis on these slides and send them out”?

5. Your team saves time for analysis of the analytics.

We get that many important questions are time sensitive. When important strategy questions come in from different quarters, we often have a tendency to complete a presentation so we can email it off and move on to creating the next deck. As difficult as it may sound, it is always better to set expectations appropriately about timing so that you can build in the necessary time for analysis. Several of the best practices above (and below) focus on ways to free up that time. The teams who are the very best at this spend at least as much time analyzing their charts as they do creating them.

6. Your team has automated the creation of entire presentations.

Unfortunately, one of the most time consuming areas of analytic communications is taking screenshots or copying and pasting from another application into PowerPoint. The most efficient teams have automated the creation of entire presentation decks, so that any of the analytic output (no matter how many charts) they build is exported en masse to their PowerPoint application. Teams that have this capability also don’t fret updates, edits and changes, because they simply re-export rather than going through the copy/paste process again.

7. Your team already knows about “best practices” for data display.

The very best teams don’t need to play around with chart formats to find the optimal display of financial data. They have already developed chart formats they are comfortable with, based on the long history of finance professionals thinking about the best way to display data and a knowledge of what their leaders are comfortable seeing. They use prebuilt charts that already follow the formats they like.

8. Your team is open-minded about data display. This may seem to contradict the point above. It doesn’t. The point is that your team pays attention to the best practices out there. They are always looking for the next great innovation and aren’t above taking some guidance on when to use, and when not to use, certain views.

9. Your team collaborates on analytics from the beginning of the process. This one is much easier said than done, but the organizations that do it properly save a lot of time. A natural human inclination is to show a final draft of a product to get feedback. In the situation we’re describing, that means circulating a presentation deck among the team for feedback. This is a form of collaboration, but it is the form that requires the longest cycle times (reformulating charts and updating presentations).

The most effective teams have developed ways to work collaboratively from the start of the process. Instead of working “in silos” on different problems which they will later share with each other, truly efficient teams share analytical responsibilities on each of the problems. They have applications and processes to support this collaboration, and produce presentation decks together which need very little final editing.

10. Your team makes minimal (or no) data errors.

One of the hardest things about a set of analytics in a presentation is ensuring the data is internally consistent and will be accepted by stakeholders as accurate. Every extra manual step a team introduces raises the potential for human error, and we all know executives will zero in on any data that is incorrect or inconsistent. Even teams who double- and triple-check all their formulas, data, and charts can still make mistakes. The most efficient teams don’t make these errors at all. They have analytic capabilities like the ones described here built into (or at least layered on top of) existing systems. By analyzing data and exporting their final presentations to PowerPoint directly from the system, these teams avoid the mistakes that can be so costly to a team’s credibility.
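One way teams reduce these errors is by automating “tie-out” checks instead of eyeballing them. Here is a minimal sketch in plain Python (the chart names and numbers are invented for illustration): every chart’s segment detail is verified against the headline total before anything is exported.

```python
# Hypothetical deck: each chart decomposes the same headline total.
headline_total = 125.0
charts = {
    "revenue by region": {"Americas": 60.0, "EMEA": 40.0, "Asia": 25.0},
    "revenue by product": {"P1": 70.0, "P2": 55.0},
}

def ties_out(charts, total, tol=1e-9):
    """True only if every chart's segments sum to the headline total."""
    return all(abs(sum(c.values()) - total) <= tol for c in charts.values())

# An inconsistent deck fails fast, before an executive ever sees it.
assert ties_out(charts, headline_total)
```

A check like this is trivial when analytics are generated from one system; it is nearly impossible to enforce across hand-copied spreadsheet charts.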



Redefining Data Discovery – An Overview

The term "data discovery" is a fairly recent linguistic construct. The idea has captured the imagination of most of us. It already has its own Wikipedia entry, analysts have quantified it, and vendors have invested millions to align their brand with it.

The old method of data discovery was to play with data and chart types in Microsoft Excel. Products have since emerged which make selecting a chart type, and the data to configure it, faster and easier. Consequently, end users often use products like QlikView or Tableau to facilitate data discovery. In many cases, users ask IT departments to build interactive drillable dashboards or preconfigured reports.

In these situations, analysts are using the metaphor for data discovery that was the modus operandi of data scientists before the term "data discovery" even existed. Even in Excel, an end user would conduct an essentially forensic exercise by either 1) drawing charts until something stood out, or 2) avoiding an IT-built report by canning entire static presentations and arranging them to repopulate automatically. Many applications have gotten quite good at helping users draw a chart quickly using filters, and have even made customizing a data set possible.

It is time for a quantum leap in the metaphor we all use for "data discovery." We think it should mean that many charts are built right away from a built-in Construct Library, so that analysts immediately start reviewing data pictorially and discovering insights.

We will elaborate more on these points in our next few posts.



Filtering and Pivoting or Making Templates

We all know the drill – someone important wants to know the answer to a question, so we make a presentation deck full of analytics that address the issue. We pull the data from various sources, merge it in Excel, and paste the charts into PowerPoint, adding analysis to the individual slide elements.

Spreadsheets do offer a lot of flexibility when it comes to chart making. They can change chart types easily – so easily that a user can choose a kind of chart that doesn't even display because the selected data does not support that chart type. They can link to data sets so that updates build charts automatically – but if the scale or format of the data changes slightly, the chart breaks and has to be fixed. When this flexibility takes the form of a pivot table, the charting supports a forensic exercise of looking at different data sets – but there is actually less flexibility in this format, since users cannot "save" charts to compare with charts in other pivots: as soon as a new pivot is created, the first chart breaks.

Some products have recently attempted to solve some of these issues through the use of "filters" or ways to slice data into a chart. This method takes the place of the traditional pivot chart metaphor of dragging measure and attribute combinations onto a canvas to make a chart. The promise of the filter is that it is easier to slice and dice data into a chart format which is argued to be more helpful than pivots when conducting a forensic exercise on a piece of data.

When it comes right down to it, "filters" are an improvement on the standard "pivot" for creating charts. But they still use the same paradigm for looking at data, as the diagram here shows. Even though some of the "attributes," like Region, are filterable, the basic chart itself is still set up through pivots (i.e. the highlighted lists on the side of the diagram). Users go through the same forensic exercise as they do with pivots, and they are still manually building one chart at a time. And when the filters change, the chart changes, but the previous pivot is lost. Creating an entire presentation deck still requires building one chart at a time, whether one is using pivots or the slightly improved "filters."

A completely different paradigm for creating those ubiquitous presentation decks
The ideal situation is one in which users create entire presentation decks of dozens of analytics while being able to compare those analytics side by side, without ever building a chart.

No losing charts.

No broken links.

No guessing what combinations of elements makes a good chart.

No building of one chart at a time.

Filtering and pivoting are time wasters. When it comes to making presentation decks out of any dataset, there is no substitute for making templates.




The Crucial Difference Between Creating and Comparing Scenarios

In a strategic business environment creating scenarios is hard. Comparing scenarios is even harder – a lot harder.

Sure, creating and comparing simple scenarios is easy. On an academic level, determining mathematical outcomes and sensitivities from a given set of inputs can be a straightforward exercise. In capital markets, this kind of objectivity often exists. Even though these scenario models can be complex, they are usually straightforward to build. For example, building a model which answers the question "what happens to our investment holdings if interest rates rise or fall to X?" is usually a straightforward exercise.

Scenario creation gets more complicated when the variables are not all known. These situations are common in business strategy. For example, questions such as "what happens to the market if competitor X and competitor Y merge?" or "what happens if we gear our investments more toward emerging markets?" are less straightforward.

That is why creating scenarios is hard work. Since most scenario creation techniques start by identifying drivers of models, and since those drivers are not readily apparent, scenario creation experts will look for useful "proxies" – known pieces of information that should serve as reasonable substitutes for the unknown drivers – which will help them build models.

The one piece of good news about scenario creation is that there are ample tools available for the purpose. Specific software applications are used in various industries to help build scenarios, and Microsoft Excel is a very strong generic tool for scenario creation. Excel features like Goal Seek help build sensitivity analyses, and the built-in Scenario Manager helps develop and keep track of the scenarios that have been created.
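For readers unfamiliar with Goal Seek, the essence of what it does can be sketched in a few lines of Python. The pricing model below is a hypothetical example, and Excel's actual solver is more sophisticated, but the idea is the same: search for the input value that drives a formula to a target output.

```python
def goal_seek(f, target, lo, hi, tol=1e-6):
    """Find x in [lo, hi] with f(x) == target by bisection.
    Assumes f is monotonically increasing on the interval."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical model: gross margin as a function of unit price,
# with a fixed $60 unit cost. Margin rises as price rises.
margin = lambda price: (price - 60.0) / price

# What price yields a 40% gross margin? (Analytically: $100.)
price = goal_seek(margin, 0.40, 60.0, 500.0)
```

Repeating this search across a range of targets is exactly the kind of sensitivity analysis the post describes.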

Many of us have known some very strong scenario modelers. Using proxies and variables, they can create models which can be used to build very descriptive scenarios. These scenario modelers often take full advantage of the tools at their disposal.

All the skill in the world at scenario building will not help compare the output of scenarios. It turns out that there are few people who do this well, and even fewer tools available for comparing scenario output.

Let's say that an executive, with the support of the scenario planner, has finally painted two different pictures. For example, suppose two scenarios assume the same level of investment in the following year: one depicts what is likely to happen under "business as usual," and the other depicts what is likely to happen if that same investment level is tilted slightly toward spending on emerging markets.

How can the executive make a strategic choice between the two scenarios? What application can be used to compare them? Trying to use the output from the scenario-building application means toggling back and forth, looking at one set of numbers at a time, or printing information out and trying to place it side by side.

This is the exact reason we have taken the approach we have with the Agylytyx Generator. The Agylytyx Generator is not a modeling tool; it does not build scenarios. There are enough tools like that in the marketplace, and as we've seen, Microsoft Excel is pretty good at it. Instead, the Agylytyx Generator allows users to quickly and easily create graphical side-by-side comparisons of scenarios, using perspectives built by the end users themselves. Even better, with a click of the mouse, the Agylytyx Generator applies yet another perspective to the scenarios – one might be the CFO's point of view, another the VP of Sales'.
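A minimal sketch of the perspective idea in plain Python (the scenario numbers, measure names, and perspectives are invented for illustration, not the product's actual interface): a "perspective" selects the measures one stakeholder cares about, and applying it lays the same measures out across all scenarios for comparison.

```python
# Two hypothetical scenario outputs from a planning exercise.
scenarios = {
    "business as usual": {"revenue": 500, "net_margin": 0.12, "risk": "low"},
    "emerging-markets tilt": {"revenue": 540, "net_margin": 0.10, "risk": "medium"},
}

# Each perspective is just a list of measures for one stakeholder.
cfo_perspective = ["net_margin", "risk"]
sales_perspective = ["revenue"]

def side_by_side(scenarios, perspective):
    """For each measure in the perspective, pair up the value
    from every scenario so they can be compared at a glance."""
    return {m: {name: s[m] for name, s in scenarios.items()}
            for m in perspective}

cfo_view = side_by_side(scenarios, cfo_perspective)
```

Switching from the CFO's view to the VP of Sales' view is a one-argument change, which is the "click of the mouse" in the paragraph above.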

Scenario building is hard enough. Scenario comparison shouldn't be. Understanding a business strategy graphically can help answer questions that staring at numbers may not. "How much risk am I taking?" "What will be the short-term and long-term effects on net margin?" "What will be the effect on my market share and ability to compete?" are all questions the Agylytyx Generator can help address. That is what we mean by the tagline "Strategy Visualized." In this case, it could just as easily be "Scenarios Visualized."



5 Ways to Help Sell a New Approach to Business Analytics Internally

It is not unusual for a business constituent to identify a better solution to an existing problem, only to have IT insist that existing tools can or will be applied to that problem. This roadblock is even more common when the prospective solution is SaaS/cloud based.

There are various reasons why this happens so often. Here are some of the motivations behind the obstacles IT throws up:

"Not-invented-here" syndrome. This is the idea that anything not created or sponsored by the IT department is not a good option. Particularly when a toolset has recently been purchased in the same perceived space, this problem can be relatively acute.

"Rising expectations." This is the idea that a recently purchased or deployed solution can or will directly address the business problem identified, when in fact business users find that it partially or completely leaves the problem unaddressed. No one wants to admit that they have adopted a solution which leaves business problems unanswered.

"The unknown quantity" factor. When IT is presented with an approach to a business problem which they have not encountered, the approach can be perceived negatively. Ostensibly this occurs because IT doesn't know the impact of making the approach part of the fabric of the business. In reality, the reaction is motivated by human nature: no one wants to admit they haven't seen a particular approach before.

As daunting as these hurdles can be, some best practices for engaging IT can help head them off or even overcome them. Some of these best practices appear below:

Engage IT early and often. The best way to make IT feel like a partner is to involve them in any effort to identify a solution. In one recent enterprise software implementation in support of finance, IT was involved in producing the RFP and was invited to all vendor meetings – IT was engaged from the beginning as a key partner.

Produce a Business Requirements Document (BRD). Product requirements expressed informally in meetings, hallway conversations, or emails are too easily dismissed, and a resistant IT organization can too easily ignore gaps in meeting them. Documenting business requirements leaves no room to claim a business challenge is fully addressed when it isn't. This approach creates an objective definition of requirements that everyone can agree upon, and it improves the way business users articulate requirements. There is something about being forced to write things down that brings out the best in our communications.

Build an effective business case. Different organizations use different approaches to justify adopting a solution – some use NPV, some Payback, some IRR, some ROI, and so on. In any case, demonstrating how quickly a solution pays for itself and generates returns for the business often proves difficult to ignore.

Identify ways the suggested approach maximizes the value of existing tools. Positioning a new approach as highly complementary to applications the organization already uses can help considerably. Explaining how the prospective solution can be delivered within the interface of an existing application increases the value of solutions which have already been adopted. Showing how IT can "take credit" for the impact of adoption by improving the business case for existing applications often works.

Minimize the impact on IT. Using a cloud based solution often minimizes the impact on IT, though this approach should be used carefully and depends on the organization. In some cases – particularly when data integration is required for a cloud solution to work – IT can be minimally involved yet still take credit for the implementation without much resource utilization. In other cases, IT may not need to be involved at all, or even know the application is in use: if a solution is cloud based, fully self-service, and delivered at an appropriate price point, it can fly "under the radar." At the appropriate time, this type of implementation can even help make the case to IT for why the solution should be integrated.
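The business-case arithmetic behind the practices above is simpler than it sounds. A minimal sketch of two of the metrics mentioned (payback period and ROI) appears below; the cash-flow figures are purely illustrative, not drawn from any real implementation:

```python
# Hypothetical sketch: two common ways to quantify a business case.
# The dollar figures below are illustrative assumptions.

def payback_period(initial_cost, annual_savings):
    """Years until cumulative savings cover the initial cost."""
    remaining = initial_cost
    for year, saving in enumerate(annual_savings, start=1):
        if remaining <= saving:
            # Interpolate within the year for a fractional period.
            return year - 1 + remaining / saving
        remaining -= saving
    return None  # never pays back within the horizon

def simple_roi(initial_cost, annual_savings):
    """Total benefit relative to cost over the horizon."""
    return (sum(annual_savings) - initial_cost) / initial_cost

cost = 100_000                       # up-front license + rollout
savings = [40_000, 60_000, 60_000]   # projected savings per year

print(payback_period(cost, savings))  # → 2.0 (years)
print(simple_roi(cost, savings))      # → 0.6, i.e. 60% over three years
```

A one-page summary of numbers like these, attached to the BRD, makes the case far harder for IT to wave away than a hallway claim that the tool "pays for itself."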

Business constituents should never simply accept roadblocks from IT. If a solution is worth adopting, there are plenty of ways to get IT on board.



When a Template is not a Template

In the last blog post, we looked at the way the Agylytyx Generator can be used to create perspectives for various constituents across the business. In this post, we look at how those perspectives represent real and useful templates. By way of contrast, we will examine more closely what we have come to expect templates to be, and how existing products reinforce those perceptions.

The word "template" means different things in different contexts. Marketing folks may use it for a document or presentation format that creates consistency across different parts of the company. In the context of reports or analytic packages, templates have come to be associated with the mold, pattern, or model for a particular approach.

Vendors in the business analytics space have come to treat templates like a check box – almost as if to say "yeah, we have that too." For example, vendors commonly claim to have an "out of the box portfolio management template" or a "ready to use gross margin module." What they really mean is "yes, we have some charts we have collected that portray gross margin information." But when it comes time to actually use those charts (or reports, or dashboards, or whatever the "template" concerns), we invariably find that the template must be "customized" to include our specific information. For example, support may tell us that it is necessary to "associate part id's with the unit of measure for each report," or that "defining regions in this template is as easy as choosing the proper attributes and mapping them to the correct field id's so that the reports will populate correctly." These are actual situations, by the way.

Following these steps to "customize a template" often requires an IT-like level of system knowledge. It is little wonder that data analysts spend so much time on technical manipulations of data and have little time left for valuable strategic analysis. Often, "customizing a template" takes as long as building all the charts in the template would have taken in the first place. So what good are "templates" anyway? When a user needs to choose attributes and units of measure in order to populate a template, the so-called template is not really a template at all.

The Agylytyx Generator takes a different approach to templates. Rather than treating "templates" as a one-size-fits-all tool, the Agylytyx Generator is a template creation platform. The application eliminates the need for end users to define units of measure, map data fields, or choose the order of attributes. The only thing end users customize is templates, not charts. The Agylytyx Generator builds the charts through the application of the templates.

So when is a template not really a template? When an end user has to customize charts in a template. When is a template a real template? When the end user has complete control over the creation and editing of the template.



Defining Business Intelligence

Business intelligence is not as intelligent as we may think it is. This has a lot to do with the way we have come to think about the whole "category" of applications called "Business Intelligence." A quick linguistic analysis is insightful. Sometimes a term or phrase finds its way into our business vernacular. The term may have real value before it enters common use, but that value is almost always diluted by it.

The common pattern is this:

1) No one has heard the term before, so its use probably indicates advanced knowledge, and it is a way of recognizing others who understand it;
2) The term is expropriated by many who wish to appear "in the know" – these people may use the term in the proper context, even if they don't always understand all the implications;
3) Everyone has heard the term already, so no one is impressed by its casual use, even when it is in the proper context, which it frequently is not;
4) The term has become so overused that people go out of their way to explain the concept in other terms;
5) The term is so obsolete that it isn't even recognized anymore.

Consider the following examples.

1. If a concept has merit, the evolution of the term corresponds to the concept's evolution. For example, the ability of a local computer to access applications running on a server was known as:

"time sharing" in the 1970's;

"client-server" in the 1980's;

"application service provider" (ASP) in the 1990's;

"software as a service" (SaaS) in the 2000's;

"cloud" from 2010 onward.

2. Sometimes terms are not time dependent, but they may reveal a lot about the maturity of an organization. Think about the idea of evaluating an investment. Common terms which have been used over the years include:

Payback – the time at which an investment pays for itself

Internal Rate of Return (IRR) – the discount rate at which an investment's returns exactly offset its cost (that is, the rate at which its net present value is zero)

Return on Investment (ROI) – the total benefit generated by an investment relative to its cost

Net Present Value (NPV) – the total return generated by an investment adjusted for the time value of money

Economic Value Added™ (EVA) – the total return generated by an investment, adjusted for the time value of money, with balance sheet factors included
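Two of the metrics above, NPV and IRR, can be sketched in a few lines. The cash flows below are illustrative assumptions (index 0 is the up-front investment, as a negative number), and the IRR here is found by simple bisection rather than any particular library routine:

```python
# Hypothetical sketch of NPV and IRR; the cash flows are illustrative.
# cash_flows[0] is the up-front investment (negative); later entries
# are the returns in each subsequent period.

def npv(rate, cash_flows):
    """Net present value: each flow discounted back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal rate of return: the rate at which NPV is zero,
    located here by bisection (NPV falls as the rate rises)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # still profitable at this rate; the root is higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

flows = [-100_000, 40_000, 60_000, 60_000]
print(round(npv(0.10, flows)))  # → 31029, the value at a 10% discount rate
print(irr(flows))               # the discount rate where NPV crosses zero
```

The relationship between the two is worth noting: IRR is simply the rate that drives NPV to zero, which is why organizations that disagree on a discount rate often fall back to quoting IRR instead.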

Now, think about the concept of "Business Intelligence" (BI). The concept of business intelligence originated as early as the 1950's. The implications for finance have evolved over time, but some of the terms commonly heard are:

Dashboard

Balanced Scorecard

Metrics Review

There is a dilemma here. For real-time information to populate any of these, IT for Finance is usually involved. Yet for financial analysts to produce dashboards, balanced scorecards, or metrics readouts that make sense each quarter, the required information and related queries change frequently. Keeping up with the latest terminology can be almost as important as producing relevant information. There is a very strong argument that the concept of "Business Intelligence" has actually devolved to pattern four or pattern five (see above). In the finance community, "Business Intelligence" is almost synonymous with terms like "dashboard," "balanced scorecard," or "metrics review." IT organizations that support finance departments frequently deride these activities. Even when they are successfully implemented, their value as decision making tools tends to be much less than originally claimed.

How can "business intelligence" be redefined in compelling and lasting terminology? The answer may lie in the original intent of the term. Wikipedia quotes a famous author who in 2009 defined business intelligence as:

"A set of theories, methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information for business purposes. BI can handle large amounts of information to help identify and develop new opportunities. Making use of new opportunities and implementing an effective strategy can provide a competitive market advantage and long-term stability."

The definition makes sense and may be a compelling, lasting one. Many recoil at the very notion of "business intelligence," yet those same people would endorse this definition. After all, who wouldn't want to find actionable data that becomes useful "for business purposes" and helps implement an "effective strategy"? Unfortunately, many business intelligence solutions do not meet the objectives set out in their own definition.

In order to stay relevant, today's finance professional needs to come to grips with existing approaches to BI. A pattern-one or pattern-two approach to BI will rise above discussions of dashboards, scorecards, and reports. A finance executive seeking to improve relevance and strategic contribution would do well to revert to the original definition of business intelligence and redefine it as "Strategy Visualized."



Executive Dashboards, The Moving Target, and
Watering Down Business Intelligence

The marketplace is full of noise about dashboards and balanced scorecards. Checking the box called "dashboards" is a requirement for vendors from project management to business intelligence to manufacturing, planning, and ERP. "Balanced scorecards," "built-in dashboards," "out-of-the-box dashboards," "customizable dashboards," "interactive dashboards," "drillable dashboards," and lots of other buzzwords have found their way into the common vernacular. After all, what executive wants to admit that he or she doesn't have "immediate visibility" into the "key metrics" affecting their business?

There are certainly some valid dashboard applications – for example, for a PMO. But there are significant limitations to the use of dashboards at the executive decision making level in large enterprises.

At this level, these approaches rarely have longevity. In this environment I have rarely (if ever) seen a dashboard obtain critical mass, much less sustain that momentum beyond a quarter. Many times, executives don't even log into the dashboard in the first place. Usually, these executives know that if an issue exists, it will be surfaced to their attention. As one manager recently told me about his VP's use of dashboards: "I don't think he's logged in to check a number himself in years."

Another common problem is that dashboards tend to be moving targets. I can remember creating a new dashboard format each quarter. The next quarter, executives would inevitably request modifications to reflect the business metrics they wanted to see that quarter. This moving target effect was not because decision makers wanted to make life difficult for corporate finance; it happened because the metrics that made a difference to the business naturally varied from quarter to quarter. Maintaining a dashboard in today's changing business environment is often like trying to nail Jell-O to a wall.

All of this underscores the fact that, when it comes to important matters of business strategy in a large enterprise, a dashboard is not the right answer. I can't remember a time that a critical strategic decision was made because someone gleaned an insight from a red light or an off-track indicator on a dashboard. When it comes to important business decisions, there is a reason executives don't log into dashboards, reports, or scorecards. Pretending that business insights can be gleaned from one of these forms shows how diluted (and deluded) our reliance on traditional business intelligence has become. Finding business insight about strategy requires context around the numbers, and you don't get that from a dashboard.



There is a Big Difference Between a Chart and a Construct

Around here we refer to Constructs when we point to a single chart in our output. One of the most common questions we get is "what is the difference between a chart and a Construct?" In fact, even those who were once familiar with the difference need to be reminded until it is etched clearly in their memory. There is a big difference between a chart (and a chart type) and a Construct.

The way all other products (Microsoft Excel, Tibco Spotfire, the Tableau products, SAP Lumira, and more) work is to first require the user to select a chart type, then select the data sets (often called things like "measures," "attributes," or "values") required to populate that chart type. Product demonstrations usually skip over this step or make it look a lot easier than it actually is. The outcome is something we all refer to as a "chart." Creating each chart requires the same process – want ten charts? Select ten chart types one by one. Populate each chart one at a time by choosing the correct measure and attribute combinations – usually a long and laborious process of trial and error.

Now want to make a template out of the set of charts so they can be re-used on any data set, under any scenario? Reusing these charts requires ensuring that additional datasets conform exactly to the same pattern as the original data, taking care not to break any links. The result of attempting to make reusable templates this way is usually catastrophic. Now try changing a chart in the "template" a bit and re-applying it across all the datasets you have created. Then try creating a new chart, adding it to the "template," and applying it to each and every dataset.

Remember, at this point only one "template" has been created. Now try repeating the process a second time to create a totally new template. Then a third time, a fourth, and more.

While managing a team of analysts in a Fortune 100 company, I worked with one "template" of charts we made in Excel, which we tried to update quarterly with new data. It should have been simpler than the exercise imagined above, but invariably links would break and charts would have to be rescaled or even recreated. This simple exercise became so unwieldy that our team would instead make charts for every data set ad hoc, every time they needed to be analyzed.

That is why we created the notion of a Construct. Put simply, a Construct is the "idea of a chart" whose creation is completely automated. Constructs rely on the fact that what all other products refer to as "measure/value" and "attribute" combinations – every possible combination – are built in. A Construct is populated automatically when a user selects a dataset to view. Since the measure and attribute combinations are already built in and defined, users never make any selections; they see what they think of as a chart instantly.

Now imagine that Constructs are used to create entire templates. Further imagine that a user can add as many Constructs as they want to a template and create as many templates as they want. Since everything is predefined, there are no broken links and no redrawing of charts – all charts render immediately and accurately. Editing templates is as easy as adding and removing Constructs, so entire templates can be created and edited on the fly, with no worries about duplicated effort or copying and pasting charts across files to rebuild presentations.

The difference between a chart and a Construct is vast. One implies lots of manual work to create dozens or even hundreds of charts. The other does not. Anyone doing data analysis should spend some time understanding and appreciating the difference. Building charts is a waste of time. Embracing Constructs means saving that time for actual data analysis.


Copyright © 2020 Agylytyx™. All rights reserved.          Privacy Policy          Terms & Conditions          Site Map          Contact