How can we apply technology to drive business value? For years, we've been told that the performance of software delivery teams doesn't matter―that it can't provide a competitive advantage to our companies. Through six years of groundbreaking research, the DORA team set out to find a way to measure software delivery performance―and what drives it―using rigorous statistical methods. This talk presents the findings of that research, including how to measure the performance of software teams, what capabilities organizations should invest in to drive higher performance, and how software leaders can apply these findings in their own organizations.
Question: I’ve found it really hard with most of the technologies we use (Java/Typescript) to have (and maintain) a set of tests that are fast enough to run on every commit. Do you have any tips?
Answer: I recommend checking out this blog post from my former TW colleague Dan Bodart: http://dan.bodar.com/2012/02/28/crazy-fast-build-times-or-when-10-seconds-starts-to-make-you-nervous/
He did a talk too: https://www.youtube.com/watch?v=nRDlYvIbSBU
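One common tactic behind this advice is to keep a small subset of the suite fast enough to run on every commit, deferring the rest to CI. As a rough illustration (the question concerns Java/TypeScript, but the idea is language-agnostic), here is a Python sketch that partitions tests by their last recorded durations against a per-commit time budget; all test names and timings are made up:

```python
# Illustrative sketch: keep the per-commit suite under a time budget by
# partitioning tests on their recorded durations, quickest first.
def split_suite(durations, budget_seconds):
    """durations: {test_name: seconds}. Returns (fast, slow): tests whose
    cumulative time fits the budget, and the rest, deferred to CI."""
    fast, slow, used = [], [], 0.0
    for name, secs in sorted(durations.items(), key=lambda kv: kv[1]):
        if used + secs <= budget_seconds:
            fast.append(name)
            used += secs
        else:
            slow.append(name)
    return fast, slow

# Hypothetical timings from a previous run
timings = {
    "test_parser": 0.02,
    "test_api_contract": 0.3,
    "test_db_roundtrip": 4.5,
    "test_e2e_checkout": 30.0,
}
fast, slow = split_suite(timings, budget_seconds=1.0)
# fast tests run in the pre-commit hook; slow tests run in CI
```

In practice most test runners support this kind of split natively (e.g. tagging tests and selecting a tagged subset on commit), which avoids maintaining timing data by hand.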
Question: For Lead Time -> do you consider time to get to a dev team? i.e. time from conception of an idea to production, or just from commit to production?
Answer: Just commit to production. The "fuzzy front end" is a fundamentally different domain (product development) where high variability is a good thing. The delivery domain is supposed to be low variability.
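To make the commit-to-production definition concrete, here is a minimal sketch of computing lead times from pairs of (commit, deploy) timestamps. The data and helper are illustrative, not a real pipeline API; a real implementation would pull timestamps from version control and deployment logs:

```python
# Sketch of the "commit to production" lead-time measure: for each change,
# lead time = production deploy time minus commit time.
from datetime import datetime, timedelta

def lead_times(changes):
    """changes: list of (commit_time, deploy_time) pairs.
    Returns one timedelta per change."""
    return [deploy - commit for commit, deploy in changes]

# Hypothetical changes: (commit timestamp, production deploy timestamp)
changes = [
    (datetime(2020, 6, 1, 9, 0),  datetime(2020, 6, 1, 10, 30)),
    (datetime(2020, 6, 1, 11, 0), datetime(2020, 6, 2, 9, 0)),
    (datetime(2020, 6, 2, 14, 0), datetime(2020, 6, 2, 14, 45)),
]
deltas = lead_times(changes)
median_lead = sorted(deltas)[len(deltas) // 2]  # median for an odd count
```

Reporting the median (rather than the mean) keeps one unusually slow change from dominating the figure, which matches the low-variability framing of the delivery domain above.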
Question: Have you captured “age” as part of your “firmographics”? Age is sometimes equated to maturity, but does it correlate with organizational performance in any way?
Answer: You mean age of the organization? No, we haven't looked at that. I have doubts as to its reliability as a measure because so much of performance is team level, and the "age" of a team is super fuzzy - how do you control for the impact of re-orgs, people leaving and joining, changes in management etc. Determining if there is any evidence that organizations get better or worse over time (when corrected for other factors) would require a longitudinal study, which is a good idea, but we don't do that.
Question: How well do you think your “secrets” could be applied to government? Do you think it could improve the management of pandemics?
Answer: We actually got a nice message from a team in the NIAID saying the changes they'd made as a result of the assessment we did there had helped them move more quickly when COVID hit. So yes, I think it can. And my personal experience working on the cloud.gov team is that these ideas definitely work in government. (Also, we find in our analysis that the results apply equally well for respondents who say they are in government).
Question: Will there be a DORA State of DevOps Survey and Report in 2020?
Answer: We have some things in the works but there's nothing I can talk about at the moment. Nicole is now at GitHub, and she released something related to your question back in May: https://github.blog/2020-05-06-octoverse-spotlight-an-analysis-of-developer-productivity-work-cadence-and-collaboration-in-the-early-days-of-covid-19/
Question: Is it possible to have highly aligned, loosely coupled teams working on the same delivery? Is scaled agile advisable? Does it come down to how you slice up the work so you aren't stepping on each other's toes?
Answer: It really depends what you mean by "same delivery". If you mean working towards the same outcome: yes, and Scaled Agile isn't necessary. If things are tightly coupled that is going to be harder, and Scaled Agile can help as a countermeasure, but ultimately it can only get you so far: you really have to invest in decoupling organizational structure (small, cross-functional, autonomous teams) and enterprise architecture (making sure you can independently test and release your services - in other words, having a true service-oriented architecture).
Question: Is chaos engineering a core part of google cloud too? Just had that question when you shared about the DiRt remarks.
Answer: With the methodology we use to do the research, we can only really look at practices that are reasonably widely adopted. Chaos engineering is still very emergent. It's definitely very closely related philosophically though. Oh sorry, you're asking about whether we do chaos engineering of the Netflix flavor at Google in particular - I am still very new to SRE so I don't actually know the answer to that, sorry.
Question: Asking for a friend: organisational transformation through building communities and seeding PoCs takes time for the gnarliest of problems. It doesn't look like anything's happening, whereas rolling out Best Practices from a Centre of Excellence looks and sounds like Great Strides are being taken. Even if the grassroots efforts are regularly communicating progress and success, if the problem is hard enough that it takes effort to understand the subtleties, then most decision makers in the organisation - who are busy with other things because of the trying-to-do-it-all-at-once problem - will buy into the easy, just-do-what-the-Centre-of-Excellence-tells-you-to-do option. Any advice on the most effective lever to attempt to pull here? Get the communities of practice shouting louder, tell the CoE to learn patience, or (somehow) reduce WIP across the org?
Answer: (Nick) Make the work visible. Then people can discuss the work in the CoPs and you get real synergy. CoEs are pretty much useless IMHO (see the 2019 DORA report). But you have to be careful not to let management co-opt the process.
My favourite Deming quote is "when a measure becomes a target then it stops being a measure". So, for example, if you make the number of passing/failing tests visible at each level, some bright spark will put a 90% coverage target on it or something. Every team will then hit 90%, no matter what it takes, rendering the measure useless.
(Jez) Targets of this kind can work provided they are set by the team. What leads to the behavior you describe - and which I have certainly seen - is targets set from above.
(Nick) Also on reducing WIP, I'd say the same - make the work visible. Often it isn't visible, so if you make it visible the burden becomes obvious and people start to talk about it. One way you can provoke this is to make batches smaller (which leads to faster cadence). In big batches it's easier to hide the WIP and the overhead it produces (cf. Don Reinertsen's "The Principles of Product Development Flow"). Smaller batches will also expose bottlenecks where the WIP piles up. In Lean they call this "lowering the water level to see the rocks".
(Jez) In addition to what Nick said, I think there's also a cultural change that needs to happen. Managers and execs need to see their role less as making stuff happen and more as creating a culture where the people doing the work can get stuff done: removing obstacles, helping people acquire the necessary skills to succeed, and increasing alignment and transparency.
One way I've seen to demonstrate progress is for management and execs to invest in community events, like an internal devopsdays where everybody gets a day off and - crucially - the people doing the work get to set the agenda, including open spaces where anyone can talk about what they've been doing.