Not many of us in the industry would deny that software has the image of being a young man’s world. Is there truth to this perception? What are some ideas about what causes it? And can we do anything to help?
Sadly, there is some evidence that this isn’t just our imagination. Norman Matloff, in this article for the New York Times, cites a survey by the National Science Foundation and Census Bureau showing that six years after graduating with a computer science degree, 57% of graduates are working as programmers; at 20 years out (when we’re in our 40s), that figure drops to 19%. That’s right, only one in five stick it out into their forties. Let me repeat that… ONE in FIVE. And fortunately (or unfortunately), that decline isn’t seen in other engineering professions; it is unique to computer science. How is it that something so profoundly impactful could stay so unexplained? How can we lose nearly two thirds of each generation of programmers and not have a solid understanding of why?
Though this issue isn’t the most visible discussion in tech, the conversation is being had. Lisa Schmeiser wrote this great article for InfoWorld, where she discusses how things often play out for older programmers in the job market, and how the “nature of the job” has an impact. (Good read.) I would agree that this is probably not as important as, say, women in tech, but let’s explore what’s happening here. My experience is that diversity topics in tech aren’t as unrelated as one might think at first blush.
So, what are some of the theories about what’s happening? Some say people are “moving up” in their careers, others that people are getting left behind. Maybe we all eventually just get sick of the same bullshit. Let’s go one by one through some of the possibilities.
Moving up
Could it be that the natural progression of most software developers’ careers is to eventually “move up” to a leadership role? Maybe we all eventually turn to the “dark side” and stop calling ourselves “programmers”. This is definitely the explanation that I WANT to be true: every programmer, through natural career progression, eventually transcends the humble “software engineer” title and moves “up” to management. However, it seems unlikely that there are that many positions to fill. Could almost every developer really become a manager without leaving us with whole teams of “managers” and no engineers? Then again, the more I think about it, I have seen some organizations with a lot of layers of middle management.
Tech moves fast
It’s the nature of the beast: tech changes quickly. The skill sets in high demand this decade are passé the next, and sometimes people just can’t keep up. Let’s illustrate this with a hypothetical. It’s 2007, and two developers get jobs. Under the wise direction of their respective CTOs, one spends the next decade coding in Enyo and the other in ReactJS. The job market and opportunities for these two today are wildly different. We would like to think that both developers’ résumés receive similar attention, but between automated résumé filtering, tech recruiters searching for the right keywords, and opinionated developers participating in the interview process, we all know they won’t have the same opportunities. Could this have enough of an effect on enough people over time to contribute in a significant way to the drop-off we see?
Job stability
Some argue that job stability is the problem, and there are a few ways to look at it. On one side, start-ups come and go in the software world. That risk is inherent and likely unavoidable, but it is also a risk we are aware of when accepting a role at those companies. Start-ups, however, are just one type of company. We could also view stability as how long one spends at a job, regardless of whether it’s a small start-up or a large ‘stable’ company. Here you can see the average number of years software engineers stay at a job, broken down by major cities. The average? 1.5 years. If that average holds throughout a professional programmer’s career, it wouldn’t be outrageous for someone to work at 25 different companies in the 40 years before retirement. What would the side effects of this nomadic lifestyle be? To the family? (stress, moving) To the company? (cost of turnover, ownership, expertise)
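The arithmetic behind that claim is worth making explicit. Here is a back-of-the-envelope sketch (the 40-year career length and 1.5-year tenure are the figures from the text; assuming both hold constant, which is obviously a simplification):

```python
# Back-of-the-envelope: how many employers would a full career span
# if the average tenure really is 1.5 years per job?
career_years = 40    # rough working life before retirement, per the text
avg_tenure = 1.5     # average years per job, per the cited figure

jobs = career_years / avg_tenure
print(round(jobs))   # roughly 27 employers over a career
```

Even allowing for a few longer stints along the way, the 25-companies figure is not hyperbole; it falls straight out of the average.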
Burnout
Software is not often seen as an 8-to-5 job. We’ve all pulled our fair share of 13-hour days and seen some midnight pull requests. Go to any conference and you’ll see knickknacks for sale with phrases like “Programmer: a machine that converts coffee into code”[link] and “Eat-Sleep-Code”[link]. Some say a culture like this does more harm than good. Is it a sustainable practice? Is it a behavior that works for people at all stages of their career and life? In the shadow of this expectation, where successful programmers have to be coding all day, every day, we do occasionally see some people push back. But maybe it is also pushing people out.
“‘if you really love your work, it’s easy to work 100-hour weeks!’ i really love donuts, but if i eat a whole box, i realize that i’m stupid.” –Amy Hoy (@amyhoy)
Of course it’s hard to talk about burnout without touching on work-life balance. Belén Albeza makes good points here about having a life outside of coding, and that this stereotype of long-hours and coding-is-life mentality only applies to a “very specific demographic”. Adding:
“Life happens. People meet other people who become partners. People have kids. People build families. Developers are people.” –@ladybenko
The suggestion of a conflict between being a “true developer” and having a family or a life is unsettling. Holding this idea up against the question “Where are all the older programmers?”, it seems almost too good a fit; that, or a very compelling coincidence. If this attitude does indeed drive some away, could it also be driving away other demographics? Women? Or perhaps deterring people who might consider switching their career TO programming?
A study from February 2016, The Effect of Working Hours on Cognitive Ability, from the University of Melbourne, found a correlation between the number of hours worked per week and cognitive decline in people over 40. tl;dr: “when working hours exceed 25 hours per week, an increase in working hours has a negative impact on cognition.” Could it be that at the intersection of age, long work hours, and cognitively intense work, burnout strikes especially hard? A kind of perfect storm.
So, what do we do about it?
Maybe it’s just the nature of the job that we burn brightly but briefly. Changing an entire industry is hard, maybe impossible. But there is something each of us can do that has a personal impact, regardless of whether we accept short careers as a necessary reality of the industry or just an unfortunate current one. One pragmatic thing we can do as individuals is have a backup plan. Look out for number one first; apply the oxygen mask to yourself before assisting others. We’re privileged to be in an industry that pays considerably more than the average career, so spend modestly, save aggressively, and pay off that house early. We should do whatever makes sense to build ourselves a safety net that eases a transition out of the industry, just in case.
Beyond what an individual can do for herself, there are some ways we can help at a cultural and organizational level. I’ll skip talking about creating work/life balance in the office, since there’s enough there to fill 10 blog posts. But what about job stability and skill stagnation? There are ways to keep our employees’ skills current without expecting them to spend 40 hours a week outside of work learning new technologies. Employee training can benefit both the employee and the employer: choosing a current technology for a feature, or for a refactor in an outdated system, gives the employee the opportunity to learn a competitive skill while making it easier for us to find talent in the future. Invest in the employee at the same time as the software. Thinking about the relationship between talent acquisition and our software can also remind us of the cost of losing and acquiring talent (remember that 1.5-year average). When we see a high-risk stack, or a role that would be very expensive to replace every other year, we can be proactive about compensation. I’d rather pay and develop an expert than have to train someone new (and cheaper) to be good enough every 18 months. Developing experts and compensating proactively is a solid approach to giving our developers stability on the job without falling behind the curve. I’m sure there are other great approaches out there too.
There’s a future of the tech industry where we can hold a job for more than two years at a time, without stagnating, and without ignoring our families. There can be a future where each team has a cross-section of developers from all stages of their careers and all stages of their lives. Let’s work to create a culture of stability, a culture of individual growth, and a culture of balance. We can get there by acknowledging the value of a diverse team. We can get there by acknowledging that our developers are people with lives, and by working within that reality, even supporting it. Finally, let’s ignore the hype of eat-sleep-code-repeat, because that’s bullshit.