At Coveros, we regularly perform technical and process evaluations for customers. This work is usually the first piece of any consulting engagement and allows us to form an implementation plan based on a sound understanding of the current situation. Occasionally we simply deliver a detailed technical evaluation report that the customer can use to implement the recommendations themselves.
After doing many of these, both technical and non-technical, I have identified some best practices for technical and process evaluations, along with some pitfalls to avoid. This advice applies to formal consulting evaluations as well as to more informal internal team evaluations and self-assessments.
DO make it a partnership
To do a complete and thorough evaluation, you will need the help of the people doing the work being assessed, as well as the management overseeing that work. For example, if you are doing a technical application evaluation, you will need the help of the developers and teams who built, or are currently building, the application, as well as the product and/or technical managers to whom they report. They will need to give you access to the latest source code and test automation, walk you through the DevOps pipeline, and answer questions about system architecture. Similarly, for process and organizational evaluations, you will need managers, scrum masters, and the leaders who put the current processes in place. Managers are critical because ultimately they must approve and prioritize any work the team does based on the recommendations.
It is therefore important to emphasize early and often that you are assessing a particular technical implementation or the current process, not individuals. In any write-up you provide, be careful that the language emphasizes the processes and the technologies rather than individual people.
DON’T make assumptions
Ask questions. If you don’t see something, ask whether it exists. One of the best ways to ruin your credibility is to recommend, as improvements, practices the team is already following, or to report as a finding that they don’t do something they actually do. We recently assessed an application development process and wanted to recommend that the team perform full automated builds on feature branches. When we talked to the dev team, we discovered we had misinterpreted how they managed feature branches and pull requests: they were, in fact, doing full automated builds on feature branches; it just wasn’t well documented. Once we actually understood the process, we could either give them kudos for following a good practice or give them real, concrete recommendations on how to make their current process a little more transparent.
The corollary is to state clearly any assumptions you make and, when a recommendation depends on one, phrase it so it covers both the case where the assumption holds and the case where it does not. This puts your recommendations in context, so the team understands what you looked at and what you did not, and it gives them something concrete to take away either way.
DO keep the customer informed
Hold regular meetings with the key stakeholders to discuss progress and show them incremental pieces of the final product. If you are producing a report, show them an outline as soon as possible; if it is an implementation plan, show rough drafts of that plan. Meeting regularly and showing progress gives the customer an opportunity to provide feedback, and you want your final product to be easily consumable. If you show them where you are going, they can tell you whether the format you are using makes sense to them. If you will not be staying on to implement your recommendations, an agreed-upon format increases the likelihood that the customer gets real value out of them.
DON’T play the blame game
It is nobody’s fault that the process or technical implementation is the way it is. This point is critical to remember and dovetails with the partnership concept. Present any findings as factually and as objectively as possible.
DO base recommendations on solid intellectual grounds
The analogy I use with my teams is that we are building a judicial opinion based on precedent and existing case law. Recommendations and evaluations are in many ways subjective, and that is by design, but you must have solid reasoning behind them.
At Coveros, many of the recommendations that come out of our DevOps and Agile evaluations are based on a detailed scale we have developed over more than 10 years of doing evaluations. The results and experience we have accumulated allow us to benchmark against others in the industry, giving inherent credibility to our recommendations. For evaluations that don’t fit into one of our formal rubrics, we turn to industry standards, research done by other trusted organizations, and well-known best practices. This is all very important because ultimately someone will need to convince a manager to fund the work required to implement your recommendations. Give them something more than “because I said so.”
DO make recommendations concrete
Recommendations should be actionable and, preferably, actionable almost immediately. Avoid recommendations like “improve team collaboration”; that phrase doesn’t provide any actionable steps that could be implemented to improve collaboration. Instead, try something like “relocate the team to a common shared team room” or “have each developer pair with a tester for several hours every sprint.” Provide references that give additional details on the solutions you recommend.
These are just some techniques I have found useful. What is your experience? Please comment with any tips you have for technical recommendations and evaluations.