When writing blog posts, we usually reply to one or several other posts, quoting at most two or three extracts from them. In emails, it is fairly common to keep replying and articulating refinements as further thoughts are spurred. Why don’t we do it on blogs as well? I don’t know, but I’ll try here, and it may prove interesting. Let me know what you think, positive or negative ;-)
I recently asked Olivier Amprimo to examine my argument around pricing for Enterprise Social Computing offerings. He kindly did so with an excellent post, so this is my reply, a bit in the spirit of old-fashioned correspondence. Olivier gave some background info in his post; let me just add that I have rarely witnessed such deep thinking, and his blog is full of excellent material as well. Obviously, there is much more in Olivier’s analysis than the points I reply to here. (You may want to read the post Olivier scrutinizes, then his, before this one.)
Let’s start with a basic typology. There are at least 4 different types of Enterprise Social Computing offerings, each enabling a different set of capabilities for the organization:
- A core foundational platform for enterprise social networking: think of it as Facebook or LinkedIn redux inside an organization. Employees have rich profiles, explicit connections, an activity stream tied to their network, and an open architecture that allows activities to be pulled from existing business systems. NewsGator is an excellent example.
- Specific but corporate-wide tools available to all employees: here we have the Twitter-likes, YouTube-likes, etc.
- Persistent, large-community, but not corporate-wide social tools: wikis, blogs, and so on used by specific and persistent communities. Traditional communities of practice exemplify this type of need.
- Perishable social tools used by focused teams, usually project or business teams: these are the wikis used within teams to prepare business documents, for example, or a TeamSpace if using SharePoint.
Onto Olivier’s post:
Fundamentally, Julien makes the assumption that Enterprise Social Computing offering is all about product offering.[…]
An Enterprise Social Computing pilot is not about testing an off-the-shelf solution, it is about building a contextual platform, with mashups and tweaks. Yes, convergence happens in social computing software. Products embed more and more features. Lines are blurring between blogs, wikis, social networks and more. But there is no off-the-shelf social stack. The most expensive part is therefore not the software; it is its implementation. It is not the product; it is the know-how and the sensemaking.
When testing offerings falling in Types 2, 3 or 4, I would agree. But the license fees for a good platform, even for a pilot period, are substantial and rival the implementation costs. While there is no off-the-shelf social stack, organizations need to start with the best possible platform (best being highly contextual to each organization). Oftentimes, the fees for this kind of platform, under regular volume-discount pricing, take up a good proportion of the budget. Even if merely equal to implementation costs, license costs are material in launch decisions.
As a result, the pilot phase is actually the most expensive phase. It is where one injects know-how and meaning. Post-pilot costs are just deployment costs and, sometimes, additional customization.
My point exactly. As Olivier says, pilots are already incredibly expensive in terms of know-how. If you pile high license fees on top of that, as volume-discount pricing does, then the cost of the pilot explodes along with the risks, and the potential ROI of a full deployment sinks. Hence my argument for very low pilot pricing, essentially cost-offsetting, before increasing the price as usage increases.
Julien makes the assumption of a standard and enterprise-wide deployment, upfront. This works well with traditional (non-social/individual) client applications (such as Office) or with enterprise-wide process applications (such as an ERP or a CRM). The problem is that there are hardly any standard and enterprise-wide deployments of social computing, upfront. Social computing tools are not addressing the same issues as traditional IT ones. The deployment is progressive, as social computing tools address contextual and previously implicit interactions around explicit, usually enterprise-wide processes.
We disagree on this point, and here’s why: even though any deployment will be phased, the business case is built with the sought-after end in mind: a global deployment. Pilots are for testing the waters. Generally cheap in terms of funds, always expensive in terms of resources. Clients don’t test technology stacks without being assured they will be able to deploy if they want to. Hence, they negotiate and lock pricing in before they start the pilot. If they don’t, the price will unfortunately have doubled by the time they want to proceed ;-)
That’s why they have a network effect as Julien rightly noted, but that is also why the logic of applying the network approach to pricing is difficult.
1. One approach is the game theory that Jean-Lou Dupont uses on RWW: “it only takes *one* detractor (i.e. someone who sells an equivalent service at better price… that’s what competition is for, no?) to make this theoretical model fall down”.
Somehow I fail to see how the incentives work that way. Volume-increasing pricing:
- does not change the total cost (for the client) or the total revenue (for the vendor). It simply changes how they vary over time. The Total Cost of Ownership when fully deployed is the same.
- makes the initial per-user price very low, even free when the reasoning is applied 100%. How can a competitor or free-rider emerge in this context?
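The two bullets above can be made concrete with a small sketch. All the figures below are invented for illustration: the point is only that both schemes are calibrated to the same fully deployed total, so only the pilot-phase cost differs.

```python
# Illustrative sketch (invented numbers): the same fully deployed total
# cost, distributed differently over a phased rollout.

deployment = [100, 1_000, 10_000, 50_000]  # users at each phase (pilot -> full)

# Volume-discount pricing: per-user price falls as the user count grows.
discount_price = {100: 100.0, 1_000: 60.0, 10_000: 30.0, 50_000: 14.0}

# Volume-increasing pricing: per-user price rises with adoption,
# calibrated so the fully deployed total is identical.
increasing_price = {100: 5.0, 1_000: 10.0, 10_000: 20.0, 50_000: 14.0}

for label, prices in [("volume-discount", discount_price),
                      ("volume-increasing", increasing_price)]:
    pilot = deployment[0] * prices[deployment[0]]
    full = deployment[-1] * prices[deployment[-1]]
    print(f"{label}: pilot license cost = {pilot:,.0f}, "
          f"fully deployed annual cost = {full:,.0f}")
```

With these made-up figures the pilot license bill is twenty times lower under volume-increasing pricing, while the Total Cost of Ownership at full deployment is unchanged.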
As a result, if you have to play with curves, you might want to play with the Long Tail. Because what social computing caters to is the long tail behind the firewall.
For more on this, read a previous post of mine.
On slide 12 of Julien’s enclosed scribd-ed Keynote presentation, he states that “it is important for the client to make it available to all its employees: which groups of employees will recognize its value first is unknown, and you may not target the correct group if you do a target deployment”.
Julien works in an environment that is very specific, yet common. He is entrenched in IT (which is used to thinking “product”) and at a corporate level (which is far from operations, something that happens to be very true in his industry). The combination seems to lead him to think about standard products that are easily deployable top-down. One size fits all. Experience shows this tends to be time-consuming, financially costly, operationally disturbing and frightening to employees.
Back to my opening typology: for a corporate-wide enterprise social networking foundational platform, one platform is needed, not several living side by side. For specific but corporate-wide tools, one standard tool is also needed. If organizations want to share videos internally, they are looking for one YouTube-like tool, not many tools competing for critical mass in the knowledge pool being built. For Type 3, persistent tools, standardizing the platforms also eases the burden on users, by avoiding, for example, different wiki tools when switching between wikis. Not to mention the burden on IT, I know :-) Only for Type 4 social tools can a varied toolset be envisioned.
But the ROI needs to be taken into account, and yes, there are two parts: returns and investments. Two different scenarios can be envisioned for deploying team wikis, for example:
- Let users choose the wiki application best suited to their needs, and let them deploy and manage it on an ad-hoc basis.
- Recognize the need for on-demand and fully customizable wikis, but enforce a standard tool to create them, like Atlassian’s Confluence.
In case 1, you give end-users full control to choose the tool they want, but this inevitably results in a costly mess to maintain. In case 2, users keep 95% of that control, but the tool is standard. The ROI will of course differ widely between the two cases. Standardizing always brings cost savings, and can increasingly be done without impacting the returns on the applications.
In fact, it is a question of perspective (sensemaking) and methodology. As the person in charge of innovation, Julien is in a position to run pilots, which means that he is in a position to search for and interact with the “correct group” to build a successful (or not) use case. That’s empiricism. This would help him get a sense of the impact the pilot might have at a larger scale, as well as of the difficulties and prerequisites of a successful implementation (with enterprise-wide deployment in mind over the long term).
This is easy to do when you’re looking at building capabilities enabled by tools falling in Types 3 and 4, and this is what most companies do. How do you do it when you are looking to deploy an enterprise social networking platform? A Twitter clone? Within large organizations, whether or not you’re in IT, you can’t possibly know all the different groups operating with their own culture, goals, processes, incentive structures, etc. Targeting is useful but far from optimal. If you target, you may hit some “correct groups”, but then again you may not. Even if you do it exactly the right way, facts of life within large organizations can turn a correct group into an incorrect one at high velocity.
The best strategy seems to be to communicate and publicize the capabilities, then let the groups who would benefit from the technology enablers use them.
A particularly interesting approach in this conversation is Atlassian’s pricing policy and business model. Atlassian’s pricing structure is traditional for the software industry, up to a point.
Atlassian plays on:
- Lowering the entrance barrier.
The price is below the usual threshold that triggers the workflow of informing a lot of people to get one signature, with its many opportunities for loads of “why” and a “no”, and it is capped. People are thus empowered to test the product and customize it to cater to local and contextual needs. By doing so, they potentially build a use case and start a grassroots movement below the radar. The client becomes the reseller. But the pricing is structured so that while the first year is affordable, the following years are more expensive than traditional software: 50% of the initial license (vs. the usual 10–20% maintenance), not to mention the adjustment for additional users. So in the end, on a three-year basis, the software is no cheaper than traditional software, but the product is up and running in-house (so it is too late to say NO!)
- Volume of portfolio.
Lowering the entrance barrier is the best way to build a large client base. And because the ratio between the first year and the following ones is 2 to 1, Atlassian’s turnover grows substantially, almost mechanically.
- Nice and robust products.
They are self-explanatory thanks to a meaningful interface that embeds contextual help, with exhaustive documentation one click away. As such, they require neither cost nor effort in the support and maintenance phase (year n+1 and beyond) and ensure that Atlassian milks the cow, with style.
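Olivier’s three-year arithmetic above can be checked with a quick sketch. The figures are invented for illustration; only the 50% renewal rate and the 10–20% maintenance range come from his argument.

```python
# Quick check (invented license figures) of the three-year arithmetic:
# an affordable first-year license with 50% renewals, vs. a traditional
# license with 15% annual maintenance.

def three_year_cost(initial, renewal_rate, years=3):
    """Initial license plus (years - 1) renewals at renewal_rate."""
    return initial + (years - 1) * renewal_rate * initial

atlassian_style = three_year_cost(10_000, 0.50)  # 10k + 2 * 5k   = 20k
traditional     = three_year_cost(18_000, 0.15)  # 18k + 2 * 2.7k = 23.4k

print(atlassian_style, traditional)
```

The entry price is nearly half, and the first year is indeed twice the price of each following one, yet over three years the totals land in the same ballpark, which is exactly Olivier’s point.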
Yes, Atlassian’s pricing is smart. But there is a cow they don’t milk. One condition for such pricing to work well is that the unlimited-user license is not highly priced; if it were, we would not see the same acceptance of the products. Yet it fails to capture the producer surplus it could in large organizations with per-user pricing, albeit not of the tired old kind.
Julien these were my 2 cents, happy to discuss this further.
I certainly hope so, although you might want to avoid this correspondence style :-) Not sure how this works for readers…