This post originally appeared as part of the April 4 Intersect newsletter.
The conventional wisdom on hiring for tech positions is that more experience is better, and that it becomes harder to find people with the right experience as new technologies catch hold. This was the mindset with cloud computing and data science over the past decade, and it's the mindset now with artificial intelligence and distributed systems (think Kubernetes). However, it's probably grounded more in theory, and even in self-imposed restrictions, than in reality.
Bloomberg published an interesting story this week that speaks to one aspect of this situation: the increasingly common requirement of a Ph.D. for certain positions, despite the questionable benefits a terminal degree actually provides outside of academia. It's the kind of requirement that makes a lot of sense for the relatively small number of industry research labs, but might make significantly less sense for work that requires putting knowledge into practice, whether that's building products or solving customer problems.
There's also another practice that can surface in the attempt to find the "right" employees: the kind of age discrimination of which IBM stands accused. According to a lawsuit, the company was laying off older workers while hiring and glorifying younger workers, whom it allegedly believes to be "generally much more innovative and receptive to technology than baby boomers."
The problem with both approaches, overvaluing academic credentials and undervaluing industry experience, is that they ignore the ground truth of where technology is actually headed and what tends to work in practice. For example, as software development and infrastructure operations become more abstracted, companies should need fewer "10x engineers" and fewer operations staff keeping the systems up. Rather, they might want more employees who write code reasonably well, understand the business reasonably well, and can apply creativity to solving problems or driving new initiatives.
And instead of trying to hire artificial intelligence experts, it might be more useful in many circumstances to train existing employees—both business and technical—on what AI really is and how they can work together to maximize the company’s AI efforts. Even today, there is no shortage of tools that simplify the process of deploying AI models and building AI applications, especially for techniques that are actually viable in production at reasonable scale.
This isn't so much a commentary on the generalist-versus-specialist debate (although, if you're interested, this recent interview with Cloudera's Hilary Mason provides some good insight into how that debate applies to integrating data scientists with application teams) as it is a celebration of business sense and customer empathy. Both are more important than ever in the era of digital transformation, when everybody is focused on building software-powered user experiences. What sets one application apart from another is how much customers like it, trust that it's safe, and can rely on it.
It also matters whether customers trust that the company is committed to keeping the product around. Every time a company like Google kills a product, its reputation among consumers takes a hit. Most companies can't out-engineer Google, but they can commit to building high-quality applications backed by deep industry knowledge and an emphasis on customer experience.
By Derrick Harris