Editor’s note: Microsoft is a client of the author.

Every vendor that has ever briefed me on a productivity product has argued that it will significantly improve employee productivity. Few vendors have proven that outcome, however, so the claim has become something of a bad joke over the decades. That’s not to say that there are never any productivity benefits, but when you make an employee more productive, it is almost impossible to measure by how much — and whether the product made a difference. When you free up time, it’s not as if employees suddenly don’t have things to do; they just move on to other tasks. And if they are salaried, you may not even see a difference in output because they may be working longer hours to get their tasks done. 

Every product that claims to boost productivity needs a metric that a buyer can rely on to justify the claim. While I can understand why vendors haven’t pushed for a productivity score, I’m surprised buyers haven’t required this to validate vendor claims after deployment. 

This week, Microsoft moved to address the problem, at least with its Microsoft 365 line, with a new Productivity Score effort launched Oct. 29. This is particularly timely because, with people working at home and companies reporting productivity improvements, we need to answer a critical question: Is the employee more productive in terms of work-per-hour, or merely by working longer hours? (The latter could be damaging the quality of their lives and eventually lead to a backlash.)

The nature of a productivity benchmark

Now, if folks are on an assembly line, say making candy, you can measure how much candy passed through their station. (Granted, you'd have to watch for cheating and whether the station was a bottleneck.) But knowledge workers have many different tasks and requirements, such as whether they are collaborating effectively or using their time as efficiently as possible given the tools they've been given. You'd also want to compare similar employees to one another and get recommendations about how to improve the score — recommendations that would go to both management and the employee.

It is critical that complex tools like Microsoft 365, which have productivity features users may not know about or have time to learn, offer enough granularity to support comprehensive recommendations and optimize the workforce. 

Now, employees don't like being monitored, so any such tool needs to be carefully structured so it doesn't look like the data is being used to deny promotions or raises, or to set workers up for termination. The focus here should be to help the employee, not penalize them. Otherwise, workers are likely to resent the tool and either disable it or game it, making it unreliable. (I recall a videoconferencing trial we did with Apple in the 1980s in which workers disabled cameras because they thought managers were spying on them.) 

More on the Productivity Score

The new Productivity Score tool appears to do much of what I’ve called for.  It looks at both what employees are doing and how they are doing it. It will use a benchmark of best practices to highlight things known to increase productivity, such as whether meetings have agendas and whether follow-up memos have gone out to confirm commitments. 

It measures the level of use and engagement on tools like Teams. Often, such tools are deployed but never used because workers fall back on whatever they were already using. The tool will then provide recommendations to push employees to use these collaboration tools more consistently and more effectively. 

Microsoft's tool is pretty comprehensive: it looks at how employees are doing and whether their ecosystem is optimized for their work. It looks at network health and connectivity, application quality (whether apps are current and fully patched), and other environmental issues that may negatively affect employees by region. This coverage helps IT prioritize and budget for solutions that will mitigate systemic problems with the firm's tools and technology (including endpoint analytics and performance). 

One problem: because this tool needs deep integration, Microsoft's Productivity Score doesn't measure competing products. It's not that there isn't a need, but the level of transparency and cooperation between vendors doesn't yet make such a tool viable. Still, the need remains. 

That said, having this tool does provide Microsoft with a competitive advantage because it differentiates Microsoft's offerings. You can find a video of how this tool works here; it is surprisingly comprehensive. 

Wrapping up: Just a start

The market has needed total productivity scores for as long as we've had productivity tools. It makes no sense to buy a tool whose impact you can't effectively measure. And many tools given to employees are either misused or not used at all. That isn't good for either the buyer or the seller: the buyer doesn't get the value they paid for and, when that happens, will likely abandon the tool and share their displeasure, hurting sales. 

Microsoft is a bellwether, which means it can set a trend, and I'd anticipate other vendors coming up with similar tools. Given a choice, buyers will likely prefer a tool that can demonstrate its value over one that cannot. While creating a tool like this is far from trivial, I expect such tools will be far more common in a few years as experienced buyers build this capability into their RFPs. 

In short, we've needed tools like this for decades; their emergence should not only help companies uncover the actual value of the products they buy, but also provide vendors with critical information they can use to improve their wares. 

Copyright © 2020 IDG Communications, Inc.
