
Tony Robbins talks about the effect of taxes and fees on saving capital in his latest book, Unshakeable, and I thought one example he gives in his Unleash The Power Within seminar is so remarkable, simple as it is, that I had to recreate the numbers myself to check how it works.

Say that you start with $1 on the first of January and can put your money (capital) into an investment that will double one year later. At the start of year 2 you will have $2. At the start of year 3 you will have $4. At the start of year 4 you will have $8. How much money will you have after 20 years (the start of year 21)?

You can see in the table below that doubling your money each year for 20 years will leave you with over $1 million. (A 100% return every year for 20 years is completely unrealistic, but it makes the math easier to follow.)

What if your money doubles each year, but at the end of the year, someone (a tax collector or a professional money manager) takes 30% of the increase in your capital? So, when your initial $1 doubles to $2, the taxman (or money manager) takes 30 cents (30% of the $1 increase, not 30% of the full $2). How much will you be left with at the end of 20 years?

Again, we can see the answer in the table. The first year, the fee will be $0.30, leaving you with $1.70 instead of $2.

The second year, your $1.70 will double to $3.40, but the fee will be 30% of the $1.70 increase (or $0.51), leaving you with $2.89.

The third year, your $2.89 will double to $5.78, but the fee will be 30% of the $2.89 increase (or $0.87), leaving you with $4.91.

So, by the start of year 4 you will have $4.91 instead of $8 and by the start of year 5 you will have $8.35 instead of $16. This 30% tax (or fee) has essentially left you one year behind by the start of year 5.

How much will you have in this scenario by the end of year 20 (the start of year 21)?

You can see from the table that it will be less than $41,000 ($40,642.31). That 30% fee leaves you in the end with less than 4% of what you would have had had there been no fee at all.

Where did the other 96% go? If we add up the fee collected each year, it only amounts to $17,417.70. In other words, you get about $41,000 and the fee collector gets about $17,000 (for a total of about $58,000), while the other 94% of the money – $990,516 – simply disappears because it was never in the interest-generating pile of capital.
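Out of curiosity, here is a minimal Python sketch (my own, not from the book) that reproduces these figures. The function name and structure are just for illustration.

```python
# Minimal sketch: capital grows each year, then a fee is taken out of
# that year's gain (not out of the total).

def grow_with_fee(years=20, growth=1.0, fee=0.30, start=1.0):
    """Return (final capital, total fees paid) after `years` years."""
    capital, fees_paid = start, 0.0
    for _ in range(years):
        gain = capital * growth        # this year's increase
        fee_amount = gain * fee        # fee on the increase only
        capital += gain - fee_amount
        fees_paid += fee_amount
    return capital, fees_paid

no_fee, _ = grow_with_fee(fee=0.0)         # 1,048,576.00 = 2^20
with_fee, fees = grow_with_fee(fee=0.30)   # ~40,642.31 and ~17,417.70
print(no_fee, with_fee, fees, no_fee - with_fee - fees)
```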

Now this example is extreme because no investment returns 100% per year every year for 20 years and taxes on capital gains (in the US) are more like 15%, taken only when you sell the asset at the end, not 15% of each year’s gain. However, some hedge funds charge ‘2 and 20’ (2% of your total capital, plus 20% of the increase), so a more realistic example might be 10% growth per year with a 15% annual fee.

Even in that case, you end up with about 76% of what you would have had if there had been no fee, the total fee amounts to about 11% of what you would have had, and the other 13% simply disappears.
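For what it is worth, the same sketch covers this milder case too:

```python
# Reusing grow_with_fee() from the sketch above:
no_fee, _ = grow_with_fee(growth=0.10, fee=0.0)         # ~6.73 from each $1
with_fee, fees = grow_with_fee(growth=0.10, fee=0.15)   # ~5.11 and ~0.73
print(with_fee / no_fee, fees / no_fee)                  # ~0.76 and ~0.11
```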

I guess the same principle applies to non-financial matters as well. If the work you do builds on itself over time, and 15% or 30% of your effort each year goes toward unproductive tasks, then what you are left with at the end of 20 years or so could realistically be only a very small fraction of what it otherwise could have been.


Sales Velocity, the rate at which a sales team brings in money, is a common key performance indicator (KPI) of sales team effectiveness.

But, there are several ways to increase Sales Velocity and it is not necessarily obvious which ways are the best in a given situation.

Vainu.io, a software company that helps you to “identify the most valuable sales prospects for your business”, has produced a short ebook called ‘Mathematics for B2B Sales’, which deals with Sales Velocity and ways to increase it.

They give the equation for Sales Velocity (the amount of sales per unit of time):

Sales Velocity = Leads in Progress x Hit Rate x Average Deal Size / Sales Cycle Time

To show how it works, they give the example of a company that can acquire new customers through outbound marketing.

Outbound sales opportunities are generated by the company’s 10 salespeople, each of whom contacts 5 prospective customers per week. It takes prospects 4 weeks, on average, to either decide to become a customer or to decline to make a purchase. 10% of prospects (the ‘hit rate’) become customers while the other 90% decline. Each sale is worth $9000 on average.

The graphic below shows the mechanics of this process. Salespeople generate Prospects, who make a purchase decision in 4 weeks, on average. 10% of these prospects buy a unit of the product, with an average deal size of $9000.

Since salespeople each generate 5 prospective customers per week and the decision-making time is 4 weeks, there are 20 active prospects (from outbound marketing) per salesperson at any given time.
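Plugging the example's numbers into the equation, in a small sketch of my own (the variable names are only for illustration):

```python
# The example company's numbers in the Sales Velocity equation.
salespeople = 10
prospects_per_week = 5        # generated per salesperson
sales_cycle_weeks = 4
hit_rate = 0.10
avg_deal_size = 9_000         # dollars

# 20 active prospects per salesperson, so 200 for the whole team
leads_in_progress = salespeople * prospects_per_week * sales_cycle_weeks

sales_velocity = leads_in_progress * hit_rate * avg_deal_size / sales_cycle_weeks
print(sales_velocity)         # 45000.0 dollars of sales per week

# Note that the cycle time cancels out: this is just the prospect inflow
# rate (50 per week) times the hit rate times the average deal size.
print(salespeople * prospects_per_week * hit_rate * avg_deal_size)  # 45000.0
```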

Vainu’s ebook lists several ways to increase the company’s Sales Velocity. Some of these suggestions include:

  • hire more salespeople
  • improve the hit rate
  • shorten the sales cycle time

However, not all of these methods are equally effective. We have actually already seen this basic model before, in my earlier post ‘Deeper Into The Equation that Governs Your Sales Team’s Effectiveness’.

The graph below shows what happens to Sales Velocity when in week 10 we decrease the sales cycle time from 4 weeks down to just 2 weeks.

The Sales Velocity (dollars in sales per week) spikes as the large number of Prospects in Progress make their purchase decision. But, as these people make their decision, the number of Prospects in Progress drops, and with it, the Sales Velocity returns to its base value.

That is, the ‘sales cycle time’ and the number of active prospects are mechanically related to each other. Changing either one of these numbers changes the other.
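A minimal stock-and-flow sketch (my own simplification, not Vainu's model) shows this spike-and-return behaviour:

```python
# Prospects enter the pool at a fixed rate and leave it (make a decision)
# at a rate of (Prospects in Progress) / (sales cycle time) per week.
inflow = 10 * 5                       # prospects generated per week by the team
hit_rate, deal_size = 0.10, 9_000
prospects_in_progress = inflow * 4    # start in equilibrium with a 4-week cycle

for week in range(1, 31):
    cycle_time = 4 if week < 10 else 2           # cycle time halved in week 10
    decisions = prospects_in_progress / cycle_time
    sales_velocity = decisions * hit_rate * deal_size
    prospects_in_progress += inflow - decisions
    print(week, round(prospects_in_progress), round(sales_velocity))

# Sales Velocity jumps from ~$45,000/week to ~$90,000/week in week 10, then
# falls back to ~$45,000/week as Prospects in Progress drains toward 100.
```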

The term ‘Sales Velocity’ sounds impressive and the equation above makes it look like there are four key levers you can pull to increase it. But ultimately, the term ‘Sales Velocity’ is just another word for the dollar rate of sales ($ in sales per week) and the sales rate must be limited by the rate of generating prospects (the inflow pipe to the pool of Prospects in Progress), not the amount of time it takes for a new prospect to make a decision.

There might be some good reasons to reduce the sales cycle time, but hoping that doing so will increase sales or profit is not among them.

App in the Air is a mobile phone app for frequent fliers that includes flight information updates, airport maps from around the world, local weather conditions and currency exchange rates. It works on a freemium monthly subscription model: a basic version of the product is available for free, while access to a version with more features can be purchased for a monthly fee.

Businesses based on apps like these often share many of the same problems: With thousands of popular apps available for free (or low cost), it can be difficult to get potential users to hear about and download your product. Many people download an app and try it a few times, but then quickly lose interest and never use it again. Other people become regular users of the free version of the product, but never upgrade to the paid version. And many people who are paying customers let their subscriptions lapse after just a few months or a year.

 

Description of the Model

App in the Air’s CEO, Bayram Annakov, built a simulation model to capture much of this process. The diagram below is a simplified version of his work. There are two main portions: a main chain that tracks Potential Users becoming New Users, Retained Users and finally Paying Users, and a second portion that tracks Cash, income and expenses.

The stock-and-flow structure of the ‘App in the Air’ business, including some important parameter values.

 

‘Potential Users’ (near the upper-left corner of the diagram) download the app, making them New Users. Only about 20% of these new users keep using the app after the first month, becoming ‘Retained Users’. The other 80% leave the system, never to download or use the app again. Each month, some portion of the retained users on the free version choose to upgrade to the monthly payment plan, making them ‘Paying Users’, and some portion of the Paying Users cancel their subscription.

It is Paying Users who generate the company’s income, by paying a monthly subscription price. One of the business’s main expenses is advertising, which raises awareness of the product and therefore helps to drive downloads. In advertising parlance, the amount of money spent to acquire a new customer is the ‘cost per acquisition’, or ‘CPA’. If it currently costs $50 on average to acquire a new customer, then we can find the number of downloads driven by advertising by dividing the ‘monthly advertising spending’ ($20,000) by the CPA, which gives 400 downloads per month to begin with.

As the pool of potential users dries up, the cost per acquisition of a new user should tend to go up. In this model, the CPA is simply the base $50 multiplied by the ratio of the initial number of potential users to the current number. So, when the number of potential users falls to half of its original value, the CPA rises to $100. When the number of potential users remaining falls to one-quarter of its original value, the CPA doubles again.

The second means of acquiring new users is referrals from existing users (both the ‘Retained Users’ who have the free version of the product and the ‘Paying Users’). We represent the rate of ‘downloads from referrals’ by analogy to the rate of infection of a disease: the combined number of retained and paying users essentially ‘infects’ the remaining potential users. The rate is:

downloads from referrals = 0.4 x (Retained Users + Paying Users) x (Potential Users / INIT(Potential Users))
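To make the structure concrete, here is a rough Python sketch of the model as described above. The initial number of Potential Users is not given in the post, and I have assumed that advertising is the only expense, so the exact curves will differ from the figures discussed below; the structure, not the precise numbers, is the point.

```python
# Rough sketch of the stock-and-flow model. INITIAL_POTENTIAL_USERS and the
# "advertising is the only expense" assumption are my own guesses.
INITIAL_POTENTIAL_USERS = 1_000_000   # assumed; not stated in the post
AD_SPEND = 20_000                     # advertising dollars per month
BASE_CPA = 50                         # starting cost per acquisition, dollars
SUBSCRIPTION_PRICE = 25               # dollars per month

def simulate(months=240, churn=0.80, conversion=0.02, cancel=0.05):
    potential = float(INITIAL_POTENTIAL_USERS)
    retained = paying = 0.0
    cash = 500_000.0
    history = []
    for _ in range(months):
        # CPA rises as the pool of potential users shrinks
        cpa = BASE_CPA * INITIAL_POTENTIAL_USERS / max(potential, 1.0)
        downloads_ads = min(AD_SPEND / cpa, potential)
        # Referrals: existing users 'infect' the remaining potential users
        downloads_ref = min(0.4 * (retained + paying) * potential / INITIAL_POTENTIAL_USERS,
                            potential - downloads_ads)
        new_users = downloads_ads + downloads_ref
        potential -= new_users
        retained += new_users * (1 - churn)   # the other `churn` fraction leave for good
        upgrades = retained * conversion
        cancels = paying * cancel
        retained -= upgrades
        paying += upgrades - cancels
        cash += paying * SUBSCRIPTION_PRICE - AD_SPEND
        history.append(cash)
    return history

cash_over_time = simulate()
print(round(min(cash_over_time)), round(max(cash_over_time)))
```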

 

Identifying Leverage Points

What can we do to have the best impact on the company’s business? With our model, we can simply test these changes and compare the results.

First, let us consider the ‘base case’:

(A) 80% ‘churn rate’ for New Users, 2% of Retained Users become Paying Users each month, and 5% of Paying Users cancel their subscription.

Maybe with some effort we can do one of the following:

(B) reduce the ‘churn rate’ (the fraction of new users who stop using the app within the first month) from 80% to 76%.

(C) decrease the ‘subscription cancel ratio’ (the fraction of Paying Users who give up their subscription each month) from 5% to 4%.

(D) increase the fraction of Retained Users who become Paid Users each month from 0.02 to 0.04.

Which of these four things would have the best impact on the business?
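With a model like the sketch above, the comparison takes only a few lines (again, the exact numbers depend on the assumed parameters, so treat the output as qualitative):

```python
# Using simulate() from the sketch above.
cases = {
    "A: base case":              {},
    "B: churn 76%":              {"churn": 0.76},
    "C: cancel ratio 4%":        {"cancel": 0.04},
    "D: conversion to paid 4%":  {"conversion": 0.04},
}
for name, changes in cases.items():
    cash = simulate(**changes)
    print(name, "lowest cash:", round(min(cash)), "peak cash:", round(max(cash)))
```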

The figure below shows the amount of Cash the business would have over time for each of these four cases. (For each graph, we have stopped drawing at the point of maximum cash on hand. After that point, expenses are always greater than income and the business begins permanently losing money.) In the base case, in the first months the company would lose money, since initially there are no Paying Users. The company’s expenses would exceed its income and it would see its initial pile of $500,000 in cash dwindle to just under $67,000 before the company became profitable. The amount of Cash on hand would peak at about $5.7 million at month 189.

The figure also shows the effect of (B) reducing the churn rate from 80% to 76%, (C) decreasing the subscription cancel ratio from 5% per month to 4% per month, and (D) increasing the fraction of Retained Users who become Paid Users from 2% per month to 4% per month.

Which of these is the best option for the company? Surprisingly, they all are.

For case C, decreasing the subscription cancellation ratio (the fraction of Paying Users who stop their paid plan) from 5% to 4% has obvious economic benefits. It is the equivalent of increasing the average lifetime of a customer from 20 months to 25 months. At a subscription price of $25 per month, this increases the lifetime value of a customer (LTV) by $125. The result is that the company’s cash on hand would now peak at over $7.8 million around 200 months after the product launch. This is an increase of more than $2 million over the life of the company compared to the base case.
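The lifetime arithmetic, roughly: the average lifetime of a paying customer is about 1 / (monthly cancel ratio), so the lifetime value is the subscription price divided by the cancel ratio.

```python
price = 25            # dollars per month
print(price / 0.05)   # 500.0 -> $500 LTV at a 5% monthly cancel ratio
print(price / 0.04)   # 625.0 -> $625 LTV at 4%, i.e. $125 more per customer
```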

If our only goal is to maximize the total profit the company makes, then it looks like reducing the subscription cancel ratio is the best place to focus our efforts. However, both the base case and case C result in the company’s cash position dropping below $80,000 early in its life. If any of our estimates from the model – the initial pool of Potential Users, the churn ratio, the rate of downloading the app – are wrong, or if there are any large unexpected expenses in the first few years of the company’s operation, then the company’s cash position could reasonably drop to zero and it would go bankrupt.

To minimize the chance of the company going bust, we should want the company’s minimum cash position to be as large as possible. Of the cases we are considering here, case D (increasing the fraction of Retained Users who become Paying Users each month from 2% to 4%) results in the company reaching profitability without the cash position ever going below $165,000. The maximum cash position for this case would be $6.55 million in month 158. So, increasing the ‘conversion to paid’ fraction results in less total profit overall, but with a lower chance of going bankrupt. If our goal is to minimize the chance of going bankrupt, case D, focusing on increasing the rate at which Retained Users become Paying Users, is our best option.

But if our goal is to maximize the rate at which we earn profit, then we are better off with case B, reducing the churn rate from 80% to 76%. At month 125, the cash position will be $6.35 million, for a net profit of $5.85 million in only 125 months, or $46,800 per month. We could quit (or sell) the business at that point and move on to the next venture, leaving someone else to squeeze the last profit out of a declining business over its last five or six profitable years.

So, depending on whether our goal is to maximize the company’s total profit, minimize the chance of going bankrupt, or maximize the rate per month at which we earn profit, we should focus our efforts on improving three separate aspects of the company’s business.

Two Futures

I found this article from 1998 online, ‘The Simple Economics of Easter Island‘, by James Brander and M. Scott Taylor.

The authors consider some small South Pacific islands, especially Easter Island, and have built a simple mathematical model that they claim helps explain the island’s growth and decline in human population from its settlement around the year 400 AD until the arrival of Europeans in the 18th Century.

The model is a version of a predator-prey system with people as the hunter and forest as the prey. The forest would naturally grow exponentially at a rate of about 4% per decade, but its actual growth rate is constrained by a ‘carrying capacity’, the maximum area of trees the island can sustain when it is completely covered in forest. As the forest cover increases toward the island’s carrying capacity, the growth rate of additional forest slows down.

People harvest the forest as one of their resources. In the absence of any harvesting, the human population would decline 10% per decade. Population is gained in proportion to the amount of resource harvesting.

The equations in the model are shown below
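Written out from that description (in my own notation, which may differ from the authors’; I have added a parameter ‘f’ for how much population is gained per unit of resource harvested), they are roughly:

harvesting Resources = a x b x Population x Resource

change in Resource = r x Resource x (1 - Resource / carrying capacity) - harvesting Resources

change in Population = - (decline rate) x Population + f x harvesting Resources

where ‘r’ is the forest’s intrinsic growth rate (about 4% per decade) and the decline rate is about 10% per decade.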

‘Resource’ is the number of (acres of) trees on the island and ‘Population’ is the number of people on the island. ‘b’ is the fraction of people involved in harvesting trees (so, ‘b x Population’ is the number of people involved in harvesting trees) and ‘a’ is their ‘harvesting productivity’, according to the authors.

Why does the harvesting rate depend on the number of trees? One reason is that the higher the number of trees, the easier it is to harvest them (and vice-versa). As the number of trees drops, workers must go further from their homes to access the remaining trees and use more time to bring the wood back to the village.

The authors suggest that 40% of the population is involved in harvesting resources. “Various pieces of evidence suggest that the resource sector probably absorbed somewhat less than half the available labor supply. A value of 0.4 for B is probably in the reasonable range.”

These equations and parameter values reproduce what the authors claim was the basic population dynamic over 1500 years of Easter Island history. A small population grew to about 10,000 people by the year 1200 and then slowly declined to around 3000 people by the time of first contact with Europeans.

I am interested in the values of Population, Resources and ‘harvesting Resources’ if we run the simulation for a long time. For the case of b=40%, the Population eventually settles down to around 4790 people and the Resource to about 6250 acres of trees (just over half the carrying capacity). Thus, there are about 1.3 acres of trees per person. The long-term harvesting rate is about 0.025 acres (about 10 meters by 10 meters) per person per year.
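Here is a small Python sketch of those equations. Not all of the parameter values are stated in the post, so I have back-filled the carrying capacity (12,000 acres), ‘a’ (0.00001) and ‘f’ (4) to match the long-run numbers quoted above; treat it as an illustration rather than the authors’ exact calculation.

```python
# Sketch of the model above, stepping one decade at a time.
K = 12_000       # carrying capacity, acres of forest (back-filled guess)
r = 0.04         # forest regrowth rate per decade
d = 0.10         # population decline rate per decade, absent harvesting
a = 0.00001      # harvesting productivity (back-filled guess)
f = 4.0          # population gained per acre harvested (back-filled guess)

def run(b=0.40, decades=2_000, resource=12_000.0, population=40.0):
    for _ in range(decades):
        harvest = a * b * population * resource
        resource += r * resource * (1 - resource / K) - harvest
        population += -d * population + f * harvest
    return population, resource

pop, res = run(b=0.40)
print(round(pop), round(res), round(res / pop, 2), a * 0.40 * res)
# ~4790 people, ~6250 acres, ~1.3 acres per person, and ~0.025 acres
# harvested per person per period
```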

What if the value of ‘b’ (the portion of the population involved in resource harvesting) was some other value, like 20% or 80% or 100%? What would happen to the Population, and Resources per person?

The graph below shows the long-term (that is, ‘equilibrium’) Population for various values of ‘b’.

If ‘b’, the fraction of people who are involved in harvesting resources, is less than about 20%, the population eventually falls to 0, as the rate of losing population always exceeds the rate of gaining population. For this model, the largest sustainable population is about 4800 people when ‘b’ is 42% and it falls to about 3166 as ‘b’ rises to 100%.

One measure of the prosperity of the Easter Island society is the number of acres of forest per person (that is, the quantity ‘Resources’ divided by ‘Population’). The forest serves not only as a source of wood, but also as a source of nuts and a habitat for birds that the islanders can eat, so more trees per person is, in itself, a measure of how prosperous the society is.

The graph below shows the long-term (‘equilibrium’) ratio of Resources / Population for various values of the ‘b’ parameter. When 40% of the population is involved in harvesting resources, the Population eventually comes close to its largest possible long-term value. That point is marked with a dark blue dot on the graph.

Raising the portion of the population harvesting resources above 42% causes the equilibrium population to decrease and also causes the Resources/Population to decrease. When 100% of the population is involved in harvesting resources, the population settles down to about 3166 and the Resources/Population to about 0.8. (That point is marked with an open circle.)

There is another value of ‘b’ that also results in a long-term population of 3166 and that is when the portion of people harvesting resources is around 26.3%. In that case, the Resources/Person is about 3.

Another measure of the prosperity of the society is the per-person rate of harvesting resources (‘harvesting Resources’ / Population). The forest is a source of wood, which is useful for making houses, canoes, tools and fire. So, the amount of forest cut down per time tells how much wood is available per person. Interestingly, for all of the different ‘b’ values above 20%, the long-term ‘Harvesting per Person’ is exactly the same: 0.025 acres per person per year.
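With the reconstructed equations, these long-run values (and the constant harvest per person) can be read off directly: setting both rates of change to zero gives Resource* = decline rate / (f x a x b) and Population* = r x (1 - Resource*/carrying capacity) / (a x b), so the per-person harvest, a x b x Resource*, equals decline rate / f no matter what ‘b’ is. A quick check, continuing the sketch above (same assumed K, r, d, a and f):

```python
for b in (0.263, 0.40, 0.42, 0.80, 1.00):
    res = d / (f * a * b)                # equilibrium acres of forest
    pop = r * (1 - res / K) / (a * b)    # equilibrium population
    print(b, round(pop), round(res), round(res / pop, 2), d / f)
# b=0.263 -> ~3160 people and ~3.0 acres per person; b=0.42 -> ~4800 people;
# b=1.00  -> ~3167 people and ~0.79 acres per person; the harvest per person
# is 0.025 in every case. (Below about b=0.21 the formula would call for more
# forest than the island can hold, which is the regime where the population
# dies out.)
```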

So, depending on the value of ‘b’, two different Easter Islands are possible. In one, the portion of the population harvesting resources is at or below 40%, the resources per person is 1.3 or more, and the forest covers at least half of the island.

In the other version, the portion of the population harvesting resources is above 40%, the resources per person is below 1.25, and the forest covers less than half of the island.

Which version of the island would you rather live on?

The good news is that, in this model, it is possible to reversibly switch from one version of Easter Island to the other, although it can take several hundred years for the system to settle down and the population can change (read: drop) dramatically during the transition.

The most efficient organization is not necessarily the most effective. It might sound crazy, but it is straightforward to show that it’s true.

Think about a charity run entirely by volunteers that provides meals to hungry people. This charity receives $1000 per month in donations, which they use to buy meals. Food is literally their only expense. They wish to increase the monthly money available for buying meals, so they decide to start advertising to bring in more donations. They find that when spending no money on advertising, they get no advertising-driven donations (obviously). By spending $100, they can get $300 in advertising-driven donations. By spending $225, they can get $450. And by spending $400, they can get $600. How much advertising should they do?

The first $100 in spending will bring in $300. These donations, minus the $100 in advertising cost, leaves $200 extra to provide meals. Spending $225 brings in $450, leaving $225 extra to provide meals. And spending $400 generates only $200 extra after the advertising cost is paid. So, of the three, the maximum amount of money to provide meals comes from spending $225 per month. This is the optimal amount of money for this charity to spend on advertising.

[These numbers follow the ‘square root rule of advertising’ by the way, specifically REVENUE = 30 x SQRT(SPENDING). If you are inclined, you can check for yourself in Microsoft Excel that, for this particular case, the extra money for meals (donations minus advertising spending) is the highest when advertising spending is $225.]
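A small Python sketch does the same check (my own code; the ‘square root rule’ coefficient is a parameter so it can be reused below):

```python
# Money left for meals = regular donations + ad-driven donations - ad spending,
# assuming ad-driven donations follow k * sqrt(spending).
from math import sqrt

def meals_budget(spend, k=30, base_donations=1000):
    return base_donations + k * sqrt(spend) - spend

best = max(range(0, 1001), key=lambda s: meals_budget(s, k=30))
print(best, meals_budget(best, k=30))   # 225 1225.0
```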

By spending $225, they can take in $450, giving $225 more than they would have had otherwise. By advertising, they have $1225 rather than $1000 available for meals. But charity organizations are often judged by the portion of their donations that goes toward “The Cause”, the purpose for which they exist, as opposed to business expenses like office space, staff salaries and marketing. If they do not advertise, 100% of their income goes to food. If they do advertise, then only $1225/$1450 = 84% of their income goes to food.

By increasing the amount of service delivered, the charity reduces its efficiency.

Let’s say our charity makes an effort to get better at advertising, to better understand potential donors and get more bang for its advertising dollar. Now, its ad-driven donations are equal to REVENUE = 40 x SQRT(SPENDING). So, if the organization spends $100 on advertising, it receives $400 in ad-driven donations (on top of the $1000 it gets automatically). If it spends $225 on advertising, it gets $600 in donations. And if it spends $400, it gets $800. (Remember, the equivalent values before the improvement were $300, $450 and $600.) The organization now gets more revenue per advertising dollar spent, at every possible level of advertising spending; it is simply a more effective advertiser.

How much advertising should the charity do now?

We can check different values and see that to maximize the amount of money left over to deliver meals after advertising costs, they should spend $400 per month on advertising and receive $800 in ad-driven donations. This takes in a total of $1800 per month ($1000 from regular donations, plus $800 from advertising-driven donations), leaving $1400 per month to provide meals. For this more-effective charity, the amount of money spent on meals relative to their total revenue would be $1400 / $1800 = 78%.
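Re-running the small search from the sketch above with the better advertising response (k = 40) confirms these figures:

```python
best = max(range(0, 1001), key=lambda s: meals_budget(s, k=40))
total_income = 1000 + 40 * sqrt(best)
print(best, meals_budget(best, k=40), round(meals_budget(best, k=40) / total_income, 2))
# 400 1400.0 0.78
```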

The charity is now undeniably better at advertising and therefore spends more on it. As a result, advertising takes a larger share of its income (22% compared with 15%), so the charity looks more wasteful and less efficient when in fact it is simply better at getting results from its advertising.

I recently participated in two separate ‘startup incubator’ brainstorming programs, in the same city, a couple of days apart, with two different high-tech companies operating independently of each other. It was surprising to see how different the quality of their results was (by my own estimation), despite only a few apparently small differences in how the programs were run.

Both programs had 70-80 motivated and well-educated adults from diverse national and cultural backgrounds. They divided up into 10 teams of roughly equal size. Each team worked intensely over the course of a day or two to generate one new business concept for a product or service that could be offered to the marketplace. At the end, each team gave a 3-minute ‘pitch’ presentation describing the idea they had generated, there was a short question-and-answer period after each presentation, and a winner was selected from the 10 pitches.

Both programs had comfortable conference areas with tables and chairs, food, fizzy drinks, electricity and internet access, large pieces of paper and colorful pens and little more than that. No big piles of cash were dropped into anyone’s lap.

Both program organizers offered their teams a list of questions to help guide the idea formation. In one case, this was in the form of Osterwalder’s Business Model Canvas, which you can see below. In the other, essentially the same topics were covered, but as a list of questions starting with Who…, What…, Where…, How…, etc.

Osterwalder’s Business Model Canvas.

Having seen all of the final business ideas, in my opinion the results of one of the programs were greatly superior to the other’s, both in the slickness of the presentations and in the quality of the underlying ideas. This got me thinking about what differences in this ‘Star’ program might have made the results be (or maybe just seem) better than in the ‘Regular’ program. Here are a few things I noticed:

1. In the Star program, the teams were formed according to self-expressed interest
The Star program started with 10 self-selected people making one-minute presentations about what their basic business idea or topic was. After this pitching session, these 10 team leaders scattered around the room and the remainder of the participants came to find out more about the ideas that intrigued them. The participants each wore sticky notes briefly describing their background and skills. The team leaders got final say in selecting who would be on their team.

In the Regular program, the team leaders and each team’s members were selected by the overall program organizer. Each participant was told a couple of weeks before the program day which team they would be on, the leader was selected by the program organizer, and the team was asked to begin brainstorming a business idea topic that they would develop on the program day.

2. In the Star program, there was a strong emphasis on validation, validation, validation
At the beginning of the Star program, participants were given an explicit list of criteria on which their idea would be judged. One-third of these criteria involved the extent to which the team had demonstrated that (1) their idea would actually work and (2) a market exists for it. The teams consisted of both experts in the topic and non-experts, and all of the teams had intermediate appointments scheduled with outside coaches, some of whom were experts in the general industry of the team’s idea and some of whom were not.

In addition to ‘sanity checking’ their ideas with experts and non-experts, the teams in the Star program were asked to contact potential customers. Several teams put surveys out on SurveyMonkey and Facebook. One got more than 50 replies from within their city between the hours of 10:00 in the evening one day and 8:00 the next morning. One went out to meet with prospective customers and another went out to shoot a demo video. Another set up a functional website that described the team’s idea and provided a box for website visitors to enter their e-mail address if they wished to learn more. (5 prospective customers signed up.) Several contacted support staff from different companies that might be key suppliers or partners. One even set up an in-person meeting with a key partner.

All of the teams provided basic financial estimates of unit cost and unit revenue (and therefore, unit profit) and market size. In the Regular program, the teams were not asked to estimate market size or profit potential, and so none of them did.

In the Star program there was a strong emphasis on results and action over talk and deliberation. The participants were even told “less talk, more action”. In the Regular program, to the best of my knowledge none of the teams spoke with outside experts, contacted potential customers, or even left the building to get fresh air.

3. In the Star program, teams were told what to do, not how to do it
The Star program had no template for the final presentation: as a result, every presentation was unique, some with very high-quality graphics, professional-looking logos, creative videos, working iPhone app prototypes, or a functional website. In the Regular program, the teams were given a template presentation and had to ‘fill in the boxes’. Most teams used this template with little modification, and some added extra slides of nice-looking photos to illustrate their ideas.

4. The Star program had significant outside involvement
In addition to the already-mentioned outside coaches, the Star program’s judging was done by a panel of 9 people, including local business founders, a city politician and a professional venture capitalist, none of whom were involved in the coaching. In the Regular program, the judging was done entirely by the program participants and the program organizer, with no outside judges, not even from elsewhere within the wider company.

5. The Star program’s ideas went through a larger number of iterations
The final difference that I think might have been important was the duration of the programs and the emphasis on iterating through ideas. The Star program ran for 48 hours, while the Regular program was a single business day. In the Star program, participants were encouraged to ‘Build, Measure, Learn’. One team’s online survey results indicated that its initial idea might not attract market interest, so the team used them to ‘pivot’ to a related, but distinctly different, idea; the final presentation was a third idea.

There might have been other important differences, or maybe I am wrong in my assessment of the quality of the final ideas, but if I were in charge of a similar business concept brainstorming day, I would work to make the day more like the Star program and less like the other one.