Wouldn't it be great if we could rank developers based on their productivity, reward the best ones, and make sad noises at the others? Don't be silly, developer productivity isn't an innate ability that you can measure and rank! This talk will discuss how to think about and improve productivity based on the longest-running academically rigorous research investigation into the practices and capabilities that drive high performance in software delivery. Find out what drives productivity, how you can do more of it, and why it's important. Discover how tools and culture both impact productivity, and how to make a research-based argument for reducing technical debt.
Question: How do you do continuous delivery with software that gets installed on the client rather than the server? e.g. native mobile apps, which may need app store approval
Answer: Continuous delivery for apps is the same as CD for services (Remember, CD is about making releases boring, not releasing all the time). Make sure you're doing CI, have automated tests, and comprehensive configuration management in place (https://continuousdelivery.com/foundations/) and implement a deployment pipeline.
The final stage of the deployment pipeline is going to be the release to the app store. You can (and should) also do things like upgrade testing as part of your deployment pipeline. Back in 2008-2009 I was product manager for go.cd, which is user-installed software - we used CD to build go.cd, even though we only released 3-4x per year. There's also a nice case study for CD for firmware for printers (which again was only released a few times a year): https://continuousdelivery.com/evidence-case-studies/#the-hp-futuresmart-case-study
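To make the pipeline shape concrete, here is a toy sketch of the stage structure described above: every commit flows through build, test, upgrade testing, and packaging, but the final app-store submission is a separate, deliberately triggered stage. Stage names and the submit step are illustrative placeholders, not any specific CI product's API:

```python
# Toy deployment pipeline for user-installed software: every stage runs on
# each commit; the final release stage (app store submission) runs only when
# a human decides to release. All stage names are illustrative.
def run_pipeline(commit, release=False):
    results = {}
    for stage in ["build", "unit_tests", "upgrade_tests", "package"]:
        # Each stage would really invoke build/test tooling; here we just
        # record that it ran for this commit.
        results[stage] = f"{stage} ok for {commit}"
    if release:
        # In a real pipeline this step would hand the signed artifact to the
        # app store's review process (or publish installers for download).
        results["release"] = f"submitted {commit} to app store"
    return results
```

The point of the structure is that releasing is just one more pipeline stage, exercised the same way every time, which is what makes releases boring.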
Question: What are your thoughts on defining “Lead time for changes” as “the time between merging a PR until it is live in production” vs. “the time between starting to code and the change being live in production”? i.e., do you track “development + deployment” or just the “deployment” process?
Answer: Lead time is defined from the time you start to code (at least, for the part of the domain we care about). The reason is that we want to optimize the part of the process from writing code to getting it merged, too. Our research (which has been reproduced) shows that working in small batches off trunk drives higher performance. In particular, teams do best when they:
- Have three or fewer active branches in the application's code repository.
- Merge branches to trunk at least once a day.
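As a rough illustration, the two signals above can be checked from branch metadata; in practice you would pull last-commit and last-merge times from `git for-each-ref` or your hosting provider's API. The `branches` mapping and the one-day activity window here are assumptions for the sketch, not a tool from the research:

```python
from datetime import datetime, timedelta

# Hypothetical branch metadata: name -> (last commit time, last merge into
# trunk, or None if never merged).
def small_batch_signals(branches, now, trunk="main"):
    day = timedelta(days=1)
    # A branch counts as "active" if it received a commit in the last day.
    active = {name: (commit, merge)
              for name, (commit, merge) in branches.items()
              if name != trunk and now - commit <= day}
    return {
        "active_branches": len(active),
        "three_or_fewer": len(active) <= 3,
        "all_merged_daily": all(merge is not None and now - merge <= day
                                for _, merge in active.values()),
    }
```

A team could run something like this daily as a lightweight trend indicator, rather than as a target to game.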
Question: I think they’re all important in their own ways, and may reveal bottlenecks in different parts of the full SDLC. I’d even go so far as to suggest measuring the time between ideation (hey, I have this idea) and the change being deployed in production.
Answer: There are actually two domains we're interested in when doing software delivery:

| Product Design and Development | Product Delivery (Build, Testing, Deployment) |
| --- | --- |
| Create new products and services that solve customer problems using hypothesis-driven delivery, modern UX, design thinking. | Enable fast flow from development to production and reliable releases by standardizing work, and reducing variability and batch sizes. |
| Feature design and implementation may require work that has never been performed before. | Integration, test, and deployment must be performed continuously as quickly as possible. |
| Estimates are highly uncertain. | Cycle times should be well-known and predictable. |
| Outcomes are highly variable. | Outcomes should have low variability. |

(from Accelerate, p. 15)
Here we're talking about delivery, and we want to start the clock ticking the moment we start checking into version control.
Question: Do we have a measure of how much "stuff" we've delivered per unit of time? Lead time to deploy and deploy frequency doesn't seem adequate.
Answer: No, we don't, and that's the biggest difference between software and manufacturing, and the reason why lean needs to be adapted when moving between the two. That's part of what I was getting at when I was talking about lines of code and outcome vs output. Obviously nobody is going to pay for a tiny or half-built car when they're expecting a regular one. But people will absolutely pay for a solution delivered in half the lines of code of an existing one if it better solves their problem. All the ways of measuring "stuff" in software are arbitrary and subjective and, perhaps most importantly, have not been shown to drive the organizational outcomes we care about, such as profitability, market share, productivity, and customer satisfaction. However, the four metrics I've shared do drive those outcomes, so we know they matter.
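Unlike "amount of stuff", the delivery metrics can be computed mechanically from a deployment log. A minimal sketch covering two of the four, lead time and deployment frequency (the record fields and units here are illustrative assumptions, not definitions from the research):

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment log: each record pairs the time work started on a
# change with the time it went live in production. Field names are illustrative.
def delivery_metrics(deploys):
    lead_times_h = [(d["live"] - d["started"]).total_seconds() / 3600
                    for d in deploys]
    span_days = max(
        (max(d["live"] for d in deploys) - min(d["live"] for d in deploys)).days,
        1)  # avoid dividing by zero when the log covers a single day
    return {
        "median_lead_time_hours": median(lead_times_h),
        "deploys_per_day": len(deploys) / span_days,
    }
```

Note that `started` is the moment coding begins, matching the lead-time definition discussed earlier, not the moment a PR is merged.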
Question: Can you give your opinion of corporations using tools like BlueOptima to measure personal productivity?
Answer: Yeah as you can imagine I hate it, both because I think it's unethical and awful, and also because it's actually stupid because it won't drive the outcomes organizations want. I've talked a bunch about the importance and measurable impact of psychological safety and a mission culture: I can't think of anything better designed to destroy that than being told you're a cog in a machine and having your output (not outcomes!) continuously monitored by HR. And just to be clear, I've no experience with BlueOptima in particular, I am just strongly opposed to tools that purport to measure individual productivity.
Question: Do you have tips for communicating these lessons to execs in an organisation to try and adjust software development practices? Is there a TL;DR version of the DevOps reports I can share with execs?
Answer: Well since you asked we do have a book targeted at execs and managers: https://itrevolution.com/book/accelerate/
However we did design the reports so that you could print out the appropriate pages and leave them on the desk of an exec :).
Question: Do you have any more information, reading etc on how to avoid burnout, or to better limit the amount of burnout a team can feel?
Answer: Our research in this area is based on the work of Dr Christina Maslach. You can see a talk she gave at DevOps Enterprise Summit here: https://www.youtube.com/watch?v=SVlL9TnvphA
Question: Are there any tools/techniques that you've observed developers using at a personal level for their own productivity?
Answer: So I don't think I can give a good answer to this because my observation is that it's quite personal, and different things work for different people. I think the journey is as important as the destination, by which I mean it's important to experiment and see what works for you. Sorry I don't have a better answer. What I would say is that in the talk I discuss the things (like search and reduced tech debt) that drive productivity, and there are definitely great and well-known tools to help with those things.
Question: When you say well known tools for search (I was reading about the hours of waste looking for information), are you referring to collaboration tools like wikis and being able to search BitBucket repos etc? I mean for 'external' search there's some fairly obvious tools :-)
Answer: Here's what we say in the 2019 State of DevOps report: "Internal search: Investments that support document and code creation as well as effective search for company knowledge bases, code repositories, ticketing systems, and other docs contribute to engineering productivity. Those who used internal knowledge sources were 1.73 times more likely to be productive. Providing developers, sysadmins, and support staff with the ability to search internal resources allows them to find answers that are uniquely suited to the work context (for example, using “find similar” functions) and apply solutions faster. In addition, internal knowledge bases that are adequately supported and fostered create opportunities for additional information sharing and knowledge capture." (2019, p62).
What I meant by well-known tools in the context of tech debt is things such as refactoring tools.
Question: If you were a software development group in the very early stages of the devops journey, what would be the first few things you would focus on?
Answer: Sorry, but that's a classic It Depends answer :). I actually co-founded a startup (DORA) whose whole business model was surveying your company to find out where the best place to start was, and it was different for everyone. You can read a case study here: https://services.google.com/fh/files/misc/capitalonecasestudy.pdf. So what I would do is start by discovering your constraint. One useful technique for this is value stream mapping: https://cloud.google.com/solutions/devops/devops-process-work-visibility-in-value-stream
Question: As someone who has seen the power of a devops transformation at two different companies (even leading one), I find it hard when I meet people still sceptical of the benefits. Given the work you do with DORA and Google showing the benefits of devops, do you have an opinion on whether we should be looking to encourage mass awareness of these principles (e.g. by including the topic as standard during university degree), or is organic growth the way to go (e.g. company transformation after company transformation, person by person)? (Or option C - something else?)
Answer: It's a good question and a hard one to answer. I certainly share your experience of frustration at people who are skeptical of the benefits, particularly after having spent 6 years on a rigorous program of scientific research. I even have people who still tell me (proper) CI and trunk-based development can't work, what, 20 years in? I am a bit skeptical of teaching it at college because it's such an advanced topic. I teach graduate-level classes at UC Berkeley such as product management, and I'm not sure how I'd put together a class on this (doesn't mean it can't be done). I do wish that we put more of an emphasis on continuing education (rather than certification) in our industry. Part of the problem is systemic: it's possible to spend years in the industry without even coming across some of the capabilities I've talked about. And people think, well, I've done OK so far, it can't be that important! That's partly because our industry is so young and undeveloped in its methods and approaches (manufacturing is about 50 years ahead of us so that's an industry that's interesting to watch) and also because, if I can be permitted to be snarky for a moment, it's an industry full of people who've been told they're the smartest in the room, so it is sometimes hard for us to take a step back and think perhaps what we've been doing isn't actually the best way.