This week a debate sprang up on our company Slack about the implications of AI for Developer Relations.
It was prompted by a team member flagging a new AI startup that automates the production of technical documentation. Now, auto-generated docs are not new. Take a look at our blog post about their limitations. However, there is no doubt that increased investment will continue improving generative AI's capabilities.
So how might generative AI affect DevRel?
Our team discussion went something like this:
Is this a threat or an opportunity for DevRel teams?
Will TechEd & support roles become redundant?
If Developer Relations is about championing new technology, shouldn't we embrace this?
How does an organization create institutional knowledge if it's outsourced to an AI engine?
How do you gain customer insight if bots, not people, are interacting with customers?
Are you locked into a vendor if you want to maintain your trained & optimized AI?
Who owns the data?
Does AI for DevRel do its own DevRel?
Most of these questions are not really DevRel-specific. They are no different from the questions being asked in other organizations struggling to understand the opportunities and threats posed by generative AI. According to a report from outplacement firm Challenger, Gray & Christmas, just under 4,000 tech-sector job cuts in May 2023 (around 5% of the month's total) were directly attributed to artificial intelligence.
Ultimately, it comes down to how much you trust an AI model to accurately and authentically represent your company and your brand to your community.
Indeed, trust is at the center of a dispute between Stack Overflow and its network of volunteer moderators. In an open letter announcing strike action, signed by more than 1,500 moderators, they write that Stack Overflow's policy on generative AI,
...allows incorrect information (colloquially referred to as "hallucinations") and plagiarism to proliferate unchecked on the platform. This destroys trust in the platform...
Today the typical DevRel AI use cases revolve around documentation production and support.
Accuracy is critical to a successful Developer Relations program. Many existing programs struggle to keep documentation accurate and up to date - it's one of the most common points of friction we find in our Developer Experience Audits.
Does generative AI help to solve or compound this problem?
In documentation production, the AI acts as a co-pilot, augmenting the human and taking on routine and boilerplate tasks. In this context, AI speeds up the documentation process by inspecting APIs and code directly - the part of the process where a staffer is most likely to introduce inaccuracies or omit valuable information because of assumptions about the intended reader's knowledge level.
By using AI in documentation production, the DevRel team can focus their effort on adding a layer of meaning and understanding to the docs - the real value-add that aids conversion.
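To make that division of labour concrete, here is a minimal sketch in Python of the co-pilot pattern. The `llm_complete` helper is a hypothetical stand-in for whichever model API you actually use: the machine drafts reference material from the code's real signature and source, and the human reviewer supplies the accuracy checks and audience context.

```python
import inspect

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for whichever model API you actually use."""
    raise NotImplementedError("Wire this up to your LLM provider.")

def draft_reference_doc(func) -> str:
    """Draft reference docs from a function's real signature and source,
    so the model describes what the code does instead of guessing."""
    prompt = (
        "Write concise reference documentation for this function.\n"
        f"Signature: {func.__name__}{inspect.signature(func)}\n"
        f"Source:\n{inspect.getsource(func)}\n"
        "Include parameter descriptions and one short usage example."
    )
    # The generated text is a first draft only; a human reviewer still checks
    # accuracy and adds the audience context the model cannot infer from code.
    return llm_complete(prompt)
```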
In the support context, a well-implemented triage bot helps handle simple support requests, improves response times, and reduces the cost to serve. Still, it requires experienced eyes to check for errors and add detail and context. Zendesk, a leader in customer support products, has trained its LLM on 8 billion support tickets and says the model will not replace customer service reps. Jon Aniano, SVP of product for CRM applications at Zendesk, said:
...From what I’ve seen in the industry…Currently, the demand for great customer experiences, for emotionally connected customer experience, far outstrips the supply, and automation will bring that back into balance, but it’s not going to eliminate it.
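To illustrate the triage idea above, here is a minimal sketch in Python. The keyword matching, canned answers, and `Ticket` type are all hypothetical placeholders - a production bot would sit on a real knowledge base and a trained model - but the routing principle is the same: answer the simple, well-understood requests automatically and escalate everything else to an experienced human.

```python
from dataclasses import dataclass

# Hypothetical canned answers for simple, well-understood request types.
KNOWN_ANSWERS = {
    "password reset": "You can reset your password at /account/reset.",
    "api key": "API keys are managed under Settings > Developer > Keys.",
}

@dataclass
class Ticket:
    ticket_id: int
    body: str

def triage(ticket: Ticket) -> dict:
    """Auto-answer simple requests; escalate anything we are unsure about."""
    text = ticket.body.lower()
    for topic, answer in KNOWN_ANSWERS.items():
        if topic in text:
            # Simple, known request: reply immediately, improving response
            # time and reducing the cost to serve.
            return {"ticket": ticket.ticket_id, "action": "auto_reply", "reply": answer}
    # Anything ambiguous goes to an experienced human who can add detail,
    # context, and error checking.
    return {"ticket": ticket.ticket_id, "action": "escalate_to_agent"}

print(triage(Ticket(1, "How do I do a password reset?")))
print(triage(Ticket(2, "Your SDK crashes when I stream responses.")))
```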
In addition to Developer Experience considerations, sustainable business models must be defined. Stack Overflow and Reddit, cornerstone tools of Developer Relations community engagement, have both announced they will charge AI companies to access their content for training purposes. What happens if LLMs can no longer freely collect data until their machine hearts are content? Will this throttle the adoption of AI in the software development field?
Alongside business models, also consider vendor lock-in and institutional knowledge management. Once an AI model has been trained on your data, how frictionless would it be for you to move to another supplier? How do you manage your knowledge base if the LLM sits outside your organization? And if more customer-facing tasks are automated, how do you gather insights from your customers and act on their feedback?
In the software development context, an AI revolution has even more far-reaching implications. Code-writing AI has arrived. DeepMind's AlphaCode is achieving impressive results, according to Armando Solar-Lezama, head of the computer-assisted programming group at the Massachusetts Institute of Technology:
It's very impressive, the performance they're able to achieve on some pretty challenging problems
This is evidenced by AlphaCode's performance in online coding contests: in competitions with at least 5,000 participants, AlphaCode outperformed 45.7% of human programmers.
These advancements could free professional developers from menial and repetitive tasks, enabling them to tackle more complex problems and work at a higher, more abstract level. One could argue that this trend has been in motion within software development for a number of years, and that new applications of AI will accelerate pre-existing developer productivity trends:
APIs, libraries, and frameworks have removed the need for developers to create everything from scratch, allowing them to focus on value-add activities.
Low-code/no-code has been a big area of focus for the major vendors in the last few years.
Natural language and voice interfaces (such as chatbots) already exist, and knowledge management systems have been around for years.
Many areas of software development have been given a turbo boost by AI. It is an accelerant rather than a disruption, resulting in a massive developer productivity surge.
A recent research paper, "The Impact of AI on Developer Productivity: Evidence from GitHub Copilot", from Microsoft, GitHub, and MIT makes for interesting further reading. It found the treatment group, with access to the AI pair programmer, completed the task of setting up an HTTP server in JavaScript 55.8% faster than the control group.
One thing is for certain: the questions AI poses are fundamental. And no, we are not using AI to write our blog posts. Yet ;)
Additional contributions from Mark Cheverton and Syd Lawrence.
Updated 6/16/23 with Gray & Christmas data.
Updated 6/28/23 with Stack Overflow moderator open letter.